[Binary content removed: POSIX tar archive containing gzip-compressed data, not renderable as text. Recoverable tar-header metadata:

- Directory: var/home/core/zuul-output/ (mode 0755, owner core:core)
- Directory: var/home/core/zuul-output/logs/ (mode 0755, owner core:core)
- File: var/home/core/zuul-output/logs/kubelet.log.gz (mode 0644, owner core:core) — a gzip-compressed kubelet log collected as a Zuul CI job artifact. Decompress with `gunzip` or `zcat` to view the log.]
G Tk>` `c)gKbqJm}r֭SΣRQ0jֹT5,2hM3c5XY0P8*lHiM]ǮpQ[!,exI# [1t.4"@>@bMqhr޵6r,B)A<޾T_J"vpAN #>y U,)^俧CWHj޴mX4=3*p_*`6$Tѡ@EF%XZن= y B-<=7Bba= i붵~n)k@Lë«y]DSt^My4[*XE S.f7uL4,ǀ94`lG cMVzY \V f,w\h¯y\0^P*Yyq;)!XF$K8c=COSعOC\qu]" ̍Fȭ4X-j{ƥIIGR2٫@WYӯm)=sK9~*80{ fB-K90WYYY@K*򼂅>?w\uY)ثYȳUvG.7=Pg9"^TN`]; ;jap$aE+'JU޼GO#ME.Q)0,6h0֠PiZ'A;8ZFmM1{¾6f aČ^DIQs- \&#U2Vge\Rb B[f wy1gCY^?eqA߆'b0 $aq\Lz/(MꒄkF`NY$ P2 ɖ[Nkv=a6.xh`rL)&HM(8,s54˦cMޑi1ke4 rQ$}tdC .Hg8Bc+65MkKM;i ہRlHF'؞^Il'7z37"PtE3ҙwV 'p; _ /M>?oM>DGƿ=̎Oy$}DŽ"pv&ξ H@ʬD'71t9 6}ʤXݶQ-͈tn΅hbn\烟C?NA陟'v5;;}fs?e<\yA&7q4_nFލn%<Lǿ^r"Q\,O??/gK_~l@DAȒבL` %dŕ$Qr2 bYo?7u~eKPe&z\?ƜoׁߝׯGj;ݻrv;rשO2:u>:eQtLkѼ }FCP0jZC$>sxB{Kj|7y.<&@|Z  3`\s&i$F0MQ\9s%1D_ʍv/~R!Ou0Q*0Q4VӬdIw^t,ӧׁy ~KpQ_Co'b#R:kuj!4 BZa\X8sL/R\H\RdY)oAuT,pZП00kW&Ue+tL:GZӂ:G_`(}ū!UO域ۂa&u\b= RzB@ܕB<. Z ::J3@mW Va9qptb'ۆ \="5Wط抔mkܚ3 <;|xhϾF@,Ӷ+[Nu3a[BoPku\{u\muG0K>(AgYBUR>EQz=u~&J{<]dl8SJ.X?o|C/XT\!NI:L&dLp\ 9 x<{ut)o:$ki&V.6ِ}mw=AOVcm901H=g@PC}8rΒLu>Yy JFrqĂ:X@!(dAѤ2*d.'q0^B[!fծӁ٢P2)}9JX֡N ;SZl:m3eo.P: qrl@Hs::7㾀8!uR4j 2:e|i;XV`@j9[#&xgк *`tRLdj:jIH ^)9 9,tBxXd8ʆޡTVYIz^)h@$`;1kT_3Z g!v@[љF9l%@ ^6xy4:##ijcBYj%@R9OMₜGuGXUxY^ ny&, x:ntͅ\+6\"]w Zr+;Ġ [L;j=yҺ[=uxM۬V~}oNnH-;j9muWw7|)z;/\?J]_pی}?Q7xZ`zr O]MCsoifa3oO|HS]p509a@ *˓U* 8X"-*YR-VUBZ&/%n[dk ¢<ŷ(TnC. E/&5.td~2Q\)EOqpLA?}w|$Qk602,1,A BgNC8YLf U]z_']&`6|~}Hmn*>XyehOgJ6m?߉+q `˚49JYKhno t͇ -:&>mq{Qn&E-6,O`?QM/4lpY7 ]WXM)󅡦_ӊ!Uc)S&^4ֆ͗~36BVmkظF_߂NL)j[Zou4c$8:"lG;ʊL- ܾ+SYwp7ubϙӆ;iJ1DWtV-(ޏ76q{1ɔ\ǡ%wvPia.B@' }}-DO pYLI('wQӟa-5mԴeL|{cJ\Y's^>OGZQͨzmQ-y %&ku :PdE%EoF!uJi kd%Sx"$]V8L`࿓ґXJ7q?>Zl!7\gWob*m0 9TTGFr|?ҧ P'$1֡0 gˋ 5h-#ڱfD^!'̒^D:RLIc6IX( *g=' M3 &Ut#8PfQ `5q!xO;Ȇv}7P tgXXO>кm}5[N=>03* O(9P$*⪓)d^5ؓ)@`r wu4t|WWnE S.f7uL4,ǀ94y>R[d eb2/yǥ6*l w8ς Vw8M;)rSry9Q6MӔ!v46<z&1scr2 VqibҥQL*U*{[c%\#:gasr!TґR\e &f ~g=.󊠙կ. >E{_vfܜG̊VFBPZ}s_8/оR6qhIeN,Uǿ+*sJ9n~GE"ݲQbjVH%1;LH>8c*Ʋ 1aNg_KjCdSZU컏vcŨFx=of(@۲dl9#c˰Gϳɰkn]E(}1hMAQvcWK"P@B1վ'I,gZާt[ś]rcy7&N.>=88~d&[QZW(-2d9\\t`+R'ՇA. 
PPާ1|L=vx 흞F_GeL~#m+Ϋ+4g7.^죓QcdA5QTʃd>'Do}Q- ^~+sR^~Kk&Jp_y\~gW;?p?pkW WO!\}\6s^:޳]}`&V35mY@ :C_]Vzv:z >(uqm;zxw;α?;{$e)9(sbTO?^_D aS!|S;fL t-$sg׫'a{i"x6Nml%9;FNbV \Xyu|-a:ݼ>mw/>߸ky _.]wc=9d8P7du5YّWBҮD>=ŽG[ ~[ڳqozg}833=T-<?^, 8[%6ź3;u5hrʬC#Uc]S!R*rXc}ʹRLj^U]Nis'Wjeߺp]8Q=3dRWY-Z3i-vsZ;eZ #m޽{E5K[[j6|Jk幚nL֖2mmSSLj9 "{t h CÛ!1-.o({pQ-^rJx&\̑{}ID QRmw{c@7 P&sP!(4P\]߻b)ރ%v;٢0Fև{ l"&\ aFA]:>m >MG^%cNօ9`O/Ss;7o=fU^ٙJj[TC*)IW!s2J=L11xeFTh4V\ՐR{/R.Ic :`֞Jwb=5$}m.5뎼akG)/V ]dD9e[ݫ N7h < :^vڴpTgu%ʉs*jcFW BLtp EW򔱘k:k4'Xe=Ҽ>;lEPilG?!n`K5jFJjV\`Lhld4X?.K9_jh^EO-V(!ȮHVc@+#7CA aALAAX' ( ]j,,∛u0wT c 7đPI1,'T jXrPfDk!\o h2ݙRPHqHqƢ+xggmRch (bu+եqЪgd0-Ʒz˙t̜&dY2.2DbQ{4l {xWP>V|p.iҴA7(crm^KUU+f#A1!D͏ȇ(]B /ڄ{a8 ;L,Owv~_jsUՂY- ڦ`P4gM:V%SCҕjJUe ,Ç(yCs ~XQ|Ƣ VHDNk fO0#icccx }$H֠Q2:@vdm^XTu~V% 9ՠjK0-W1jl<ܶ06c|ӧ5:}t??^[y'KS=1ɘ}c6A@F4=Rn&?.٘U+ARD —9t '3uQ@R@"8nPW 0FϚ%l-ѧ!3Ўn<P9!z-|B bjwX-D;f {{ˎ:V٢ `8JDƢ td.;icm \g 1SkUX(ͮɀC Vjw7;k*jVXT&*BȺP>fn~Yuݐ s 0ݮ0VY{نIʣjm#42+io-JRc뭪#O}XH *6 Fx##Kpajl ڪ F@xn~d>lpzducI`y%DqΣ;PrьѓҢ:4'v-Mg'UEŨն[S1c5M/ yHY=yh4vM ƴ =:=6"|YeFĪXAIE!9sE6G=Tʍڪ-hONJdtLՃ`%(HȦČ,m PWkQ͛F1lG4v"*TOc7 "$6b]3f ᚛,;XYy/aUèƛ"@pW 3W>8E6VAP8U9P#,3U65EO(+Bj)3"Q%q45(=) FPrm⊑dpk 7O4zul)Bʝ˥!1ˡ옕G%n IF 4PbIWKlI҈5g[Q'㱻Bw\9+&ۣ^þ8Wݛ$z`SKF<_x]5D'V̈́"/,2B6Y'N'sw`W`|_f\YskU5nv~b{̞ d31D ~m2h86/,?X\t`??yhU_z:zehEXnZfnQDO4{0~Ug(WÇ9-\O\[rV1>~7Pݣ {t?}<,?VG睰c^ܻToX`A]n|dMCyq|f~}w֑1^ӄݥ%7Nzw䎼vO`WŅ=ذC{nGUf@al&X( EQ=[N7ocm;vR_hh(*r}vE"%mnKʏ)?:Ӎu?z_NV޶WtۇFv4]H\UBZBC_K|Bg@ Ꮻ?~ܣOׇ@?Z0p`O`=\;-$vtٻ6cH}lE l0D4IY %ISMǀeWunXڶvg13>[¡tEm9TOޗh*O88]Ւ˼r.tJ0mg9R, en0rK/a 00͠=# GaK-n8W_Û~Z*b+N_)aeٴk# ZYw7?ׇ`_ZqʏKפֿ$uY q _yMPhHX\ʖ b_b%SEXJ!^ؠLaStbҡ-ЁϠ͎fvYr@k/Fb ! pR ǕLRNE52 frK%Udq0䬁-QHŒ\]Tzg33_B͸] ;bc9v&Sq|Jx;M\ ʒ:Z.f[^?n{V l]ࣃ,? 
֊&N"d7%j=D첛&Rфjd^e( -PRD0x4up wzLh#SYnqIBY\e'ҁ=k$5AG0a1Q47pU5a<극)fPa7\[=:Uq Xߟgk3 h<+Uk]Dn%m:r7a+&mۻNި]L.i.u9ԺZos8ʠof!,eQsn[G=/vVyv~F[kQ{~-%|G7g9%Kmғ`6z RٝzO6ӁP$˫%]M'7g)79 {$ Ⱥw.#=-s{ GWͽ-v[)JѬz-~n3|پU'rYN27Hf'`N0 oEJn;ɴ9yv\m)BHFtꬓO~d3އ  U`ha)1H_w]Vj3sX0Q QΐNjhjX foY蕔C5yynoKMS6R\;#(tQ %^y.rn3#1$\eV|qWjn=[|yzc7nv]b~,}5:b:J +[ Q„ {Y%FѪto 62( F刊`) NPB2%89%x#zr.-E^|umfl){4.v@| 4yp.EǸ.q3&)np0sH[]D‰TfeP:ZeKH3A0ɵ Io8x O=cF@1gE)ͥ4.A r Rݷۤg(|!["A2b˶މTPq[hiע{2sAd!(s*2Ӥw3;^@]@8:Pֹ9O%ף 9|l<u(1uJiReb$01Xy#"RK=jĬR$,@dE؄ A2IRU1kQRi,Fef#gϽ9MxTձI(L+|u-~Z?"Cl49=uS7嵬VU;r D3VDnc Gi#ld1`ÆQd=2_up)@543_.jHHd 3" -XDII$ןq_*%C,0 U٧VO-7 )1 (ҘHS ZLcP "ܱI0te+6McQEW) #РG9X Q:(: R^A:zew%]vlz#;?qA"UTe|j!|o.ERD*z250 Izu w%a ~b0ޯ՟?<4OS%yxD^UNECo͙LiʆeX3S`)"ctq%*]4tJ" tQ?kUlE-6`M@ZX.*l&W ?2`x1qQ~d|=2RwyzcS*z`ei~+5ћ7AG&NJnV0d㩑 āO|jl50R%02ujŒkz]28@ Ή5qп֖.Ն?`k3/ aDo#ݶ Cڇ2efypT(Xɒf1ٶI:*AGm&6j\%lӴ'Is`&.u}7{)"~/poTm鱪SMt*'UuV{?v߿}m>G㻿 *vZH=Fw?|:0^34Ul19zWr˸,bCwsZ z?v֟T?)d]Mլd盯 .JN$laܭ/U qh v/ X1HO)‚*F10Jo4)fl@ x?m)2K?+gWC{Nj`LIf<ǹP`+)U/S=blR=Lj@vёg4(EsZ{)9ʙ (J}vXnX G$,;ግRs#Xn*&Y]둍'c5z@엃s݊:dxk"ڤS [$Jz|ۻ<)R̠J8C╊hDhʢC2/ (SR黳ڻƿ].݃0m.AuCo S{=5k&;)`\#QHYu2Ltg5f pDk1h`pq7~?{Tm)*fXxdGJNɐ$c2j0Tj+)>BBguTCRE.(? qfAnqroDevT[DU*UL vLRaù8;)ٙI9MxrKtE˯=xrVak;o/{*˜6r9.uZyw:T *X1՜c nTJ2  ;c yoJF(JH1\+}Ԗ@93Tn͘ݚq;J9.BYNU)JsݥheaU6/~ՠ0~ϬXh&G|( 4R5&J SAi\i(Fjlhd 3 j-$M*5V0/N{k("*FnmGC\jmYkNkwv /5(D-ITDE_fx4*jXkhIuޓ iU 1/ A ('8{tK;b<?E#N#v10 #$$X!/IM0LAt.4ں-KnInţŒ> ,_-"5InFI%BL 4P5a_l;kS}i-<5)5TWM-vBpNZq'YdG $ƞQ""|rQKBB-VT\(+~idٽW)ގqKgVxV^U T蘯bzG&݁EdWy̚D ZlAdzŜ0N%bAHdQIM 3zFNTFݰJJ. P,Y[Y! X[cL{MVAM"gX R^Y;F3oupC[i[./X}q45C23 FS+*8MPk٩kjR3jj*<#qEKF\r9;qUUT+ތ;z~@W"؊cG=zIvu u]?֛uEդi9-lVz]y5eE-gwv>Χvy0 67{!|bG}Þߩ{N;qtfrGƣYb֜Mtsۼ63CC=B׭~~ϵGͭ4gt+|M 'rs^ObQD odyos|̣^#q~k&օ }fVܰ:ϴl ߩaNΰ⅝J#/Ыzr^Լ5.T$lGX)Eu}Q2cЇA{ }_vO⦿@!*` 2ce J8HD3efӴ}uu"Dݳv]΢(Y]+K.oe>݊+lv Zi$fo5s+A3zty˟G~Unuo5kQޡtm|Y3Wpf˘-c1i-#3VfPOfKzz5q씣tQ:R]Y4D6t6! 
OO=LCZ)wJR t/ {bQl 8)6Vw=t5&Ǣ{KlrCY<v?OYl9mg5 SH!ΨM=w1li s9{~ǰY;Bú:iKb I$!ֹ55j=cކ ' Fd{G!6ddGBY@g,) qm[fmjFRӖ2ii|wʻkު:!KHتR$.^덬)xa˪ HS(w^7s3;#6hK3ͳN8\H9UH!sB#\a`* )|@u2m *tc}ft.)Қ@V0!":x e-JFCm fJ=qoga BX_'_/Il yTMm1'| D|zST!BK˿V{Ft!YpReS%xE@CO֝C=]uyԏuҦ^w CNFFert$ R?:>va$Dz(ceb4%Xa,DX9eHM-#5H/h7lBʝ}Lz3s,._a?w^Z6Wr"rƝ:XiHt60Ҩ0օ f\fOjS&5yBfy#eؿF%=^pC@1N1&.nŕV*](hLB"l5}6nKrzcɻĬRMZMfɍ aZCE?.s~ίSN?}-u}ՂdJ?~[,ۥzohMLBh\ aV4SL> ӧ;:+IP9,iں3i^̌km/-(`N̍^z`vPD[ݸ6ֿvRGdlHڑPf0b0&Y#2Vb9lGcvoMbmu6uK:*>420F$rX/ &NGrlҩ *;۟<5m?~~_~._/~ûUH+0Nx֐{5 |<mSCxCs䬧 ˸) Љ0ZZ3ԏB yޟtz85I+Uu?vOlbI]T1UZnb-B auv់7n=ҨբQGZF,WxcA۩;-py7Ov=]cg$iHp5fedƀ ڊ?5/D?r: +jIS R*jz-q:]IP&=ڤH :rpL>W5;%{ L׉vE!SpI<Tp>k+YAp['T#N olO(91^/uEx|m ֣I8:NA`o0u BĀxx a٨%WY$Q!XY- µVkۿbN#AQO;ph,q3QO[3bqMy(WFѿq{wr1 b棴TǁTQBgSǬkԹ1#j%^ǬP)x[;cL,kHZ5q, /$8:`J^ +ھB*h /$UfJdU%Lcdz<`1uk,9E2mC)uF+Q l]`6rcxNjj2M*a >C}`F9tJRTv4F.j|2әG0GǛ}ı?\mJpy~ezLEcmtAVb*Xr 6ƶ0' c$S pYڀ1#gs4hT¯y\0D^| ZH([-# oZdnL?)7Jv z\BK݄X.)bM7qIs6^r#3mVcWE=㠓dRVt2_F9FJse.:RdpI&IcZs`ang>.lylU!7ktZܠWq4KEi]_NU.N׫KL-v6xf~\dgy /QDꂒYf6I18-GL\dՙCeJIּپg]y)3GnL" -EiuN8l9@[RZ1)ۨi+J ) If姘<$7Zc+/ךа9Y4]jHGhs {6$4fJ#;,ƾLS"$eb%I&+++2+"2xzDdߞ-p!=[K:])<_v2uSnXTMQ;)~\yY0 H::(7BPkKIjTMV&WYe22<$XAQ(J!$Hn"S'#B2JX[J؄Li89f),1ؗyhy.掟bi^b2gO|uOtbҀDiT*T6)s{yQ5Oې?122sb&3h`d1t1psl7Rvcر/6 smrV݄2H-IkmK*4I WQD"Y (4 (L]zJDNdHhϣB^"ED1JĺD lӎK~s83KልǥpDrĖ#6)!RQ&IEyPRf.8!ʈ{Tr(nw6aq].ҮŽ@D䚜8>,j<請숲ҍ; O+L/M2 T!mz˵J1O3_p0 cu((:Rnh^ʥ.צ}}gEao x.(aHyY𷓞80m!qσT:xeͧy4$L[%cx&)L N(k(6W ۹sa=uI©I@e$gSG'"|I:/}[t_(N/]O? 
3mLh1;jWTZB>1@]67)^PEQ~T2J |djF>lirO:O`>T{l MyM|Kn.H&]Dt @`M%QH.P \1 Tf&/ ]Bq}BJHZD.j[8 SZ{SY{=dKhk'GFJB[47$>xl |f3tUP'qUf ,g[הa׏9'|{Lg2l=[+Й@iN| !~,Ѱ|9+P\" T(Lj@YLZbe0cʥz>M*fYT:a5AQV{I4;DTs,-c>\ pMACH&sS^}J#5pcP` ʅ <2$@< G[-y>#U&1d%jkdمVz˦7%h=xޑCK(cYڠl?OnV usUs 76ʽfAhy=A0ǧ,;wptdq;-lVz]y7 qd#Oi{_<}N #7C._?65;Z"j-W#6\Yuٱm jN?s&=:sJ}%ZݼIݜBo6C  pqA)eܳ7ܳ6~WOd7%D`a%a5,3,ҩ"S4zH7J^}o" wuFה@ځM>_uܿf}z]WcݬLߧ6[e?ir}TsF^;Zr]^K$^MK7I7TA҇*&h`.g#Τ Q娀6OҦ1طM)kMB.W]t.Њs^^A22R6TѮJ8pK(q]1 B 3[HJE#B8V>om h A輠d2!YDs9 +P:BI+|6"|"OI12c+IkSK#]4kMDr)0LddTA$BxD'!dz"+P%Z3+a`2w٥7'4'9%,r GkYT'ٸϔuY.>v%5 2o|:~CbitM.޻ZOͯ[؝a5 ͝ʛs/uCF z2jP>yQ[DctIx"yD*JWJa!@Ք2y-2^ +C)T[ ,E4]Jai\-Khﺣeҁ[AHܲ3d5bZ+n/KJVgd RqY|TK&PGǢ:3Z `A/N=/eWDC)pɃ0]xwoXϯ&4HK~fERN(FPd1OON*}*m <կ"EsB*S8L`QRs"}쥌{=|Su{r"ώ8]A.{]zW34N2sW506"Gt<P!4oz_iSpsÏ+W͂tЀȠ֋$SDzItu.ąg6 !$$^{%%Pe ńnuzgUIho.%ʤ$7ۭ1ܺZCaU]D4occOoMT$S8o UdQG(WQnwFUr]k"9PfRo5ELhR /EJ؆)1pSZ9qjj6ͯ]-}VMMzEk|>4xMlFOȴ -S䦍L#*R-7i/wpϴTXכ蒮!aF J=>XD; D锨ăCx mҁ KR껧՟I'h>>,ԀK/%eڶtNV3=fʴbqQojV$?e`GcWd6H,xxKl" y"7wJrҍ܌obյ_ 7ZOHt6#P(׃+^pCKԿ(ˢf 1T$e}Ӈy%#v wS5_ɉnoIWG!$J4VF#^ZߏxBgKߟ#$Es@YZS53͌4y|_}yQ8[dπyt׽-7]0!y\?ls{ yr}Oz{ ̆-"?CJ;w1~eѣ3޵#"`d7ӳ=ˤ,h#KnIN[6 ӑ7`,G%l!C"I'939O1|Vp*;GxO<`7^>;n;Gos{6Kl \0&IJ z|>qq.MalH1dutMYl BB݅w .+yRw}tu{nAA-"^4QO02#dÌ7ܦ#;o31Gwd.zr:nՋ'w+BpOK U.1,ٖC r8prƅBmFt=U $P."6$ גbrxH“h@^kfBeC9E4]fLG(tqmrus,k5>yJǏit|:xGghǷZĝxr&SΟmsGkib~R\+ow[P$"G+%w6 4Qmyv!p[vUF+cbFzR(EuCq$)*q.ˤjkjl׌QAta5x.4ut^u-Eeo\y~}7ya/6q^YchBȒ'Yq i*"7;&&Q!0XPƄcUP?P&P+5ب3{XR6 rHm*kjl~4Khj\m+kminxcШ[ґEmd#^%dv&[V=w]BGtȶnvaJ$D%!C.Ȋ4隘XPE1R|2@FuҮvjlׇQ_Ҿe?X?E#N#vJ?6SZٚ@n #J 0i.B)Vk]uvvd.[Sf_wwVy{%҂ ٪IEAsNZU@ F{yq*;GooAv|T![4wn{ |7o$1򄢫` d ]]أ watc4?4`C#4׸N&!rc1L6>x%02ic:a1X0^8+42x髁,ӱ\ B{Ie׭Ӂn= }!._94zKjHa juzup쌹sB28? 
kPV~tDHc#2SXB 0uk3}meP 0"xHN["A$djgUTHBSkwHIᑆni7 ۀ 'MqH:Wr(>;mZg1 a RLk^owQ^jho2Ze 9lP^v𲃗G /Nctx4Qgž`$2m9O6āGuO'5z` g> l, x~nz~ >uwDn$m2 S˶rmLΈ rx4f6z:-ގl ~l' \}MwO>a楑a<motC\|,x'2qtr4|!.6\l1tnm6wc^>)Znm~(ۜۥ+[~2|-Tbc }|H[˕82\A۴ƽ& 5>:p"j<Ӳs,~E<:K[, }ؗCA}G^%B`s:`xP1Cm6!i,B:DꪎT1*%Z,RFn|L.c*p3LfVvV#gGGػAM{( YŸNs2A0ˉ+J]e~Wlh&)&g+PpPL@Dg  L:V4B#^q:p`b29iuZJ1,EFc ""w,+WI#=oʏ˄|LzVwЈ)*+^**e1ιfkPҫ`B/&,bȎii8^Y@:SGOkUOyޚ :;\ $Zyrd* 5 Ot6#:a橢y \waPAN=R'/Y 0.cIH2!@=DI'vv!9$LjRX?&#VXC52+ Fԓ@QmˮHܝJx/<x z0}͡UҦ!e9IgĒ ]ō Kgtfc55i5ˁ)X R M=tm2r і+' y Em5*3IES4ј{<"n2cyHwcrb'N]0G 5W// 57AF / xIdI زHh#do?٨ |LomkU^7cm_E}yv[zu '>0J.@FZqWj.oݗ}{v5{ {La?$~i=aG[Nsj!bp6å4>f4SA7 \lDdRҩ肅YilK2%ǜEJ" rW~4 c[%gr _'>g^.15`JpHr#7mFT@>F; v S>45w)qQhzqAxg2HXj69<j}rG!5z J-z=J,Vg Gۧ^x]X7輦v] L WYOd2,߂qa ~~%k5U]%p}4ˎFKU5|E%߫_}~w?ϟ~~?}[|~_o?*F !0ǷpkS׭~[s [ܚ:zW+yKkk98/0 Cnn}m&Z,2]C/q_.F~vR nJ Ѩ! N]hQectWMs7枼pCI Q$RF\`*Ύh 8Bq]9p`ĩ:C89(_Mz6ߌ3%6*Hy 8VXȯ%e3ZsiTYmcb@p{[ Э{CU]QB wJ!6qOfu&Pt|5H569m(ךY ۥ~(zzk[iuj$h9]kM'6DB-ME Θ λұ"ӫGҫ1j׫jf{}Yyd=\\ c  a(@IR8EmA7w2m:.qMx (I8 :G &F?bBwթ:3YQkSVj;׻/_ۤl_ⲩcOn͚ITiqP-Vm!'Xwr6zgQ%' 5?ڻ*o"3AzU_ϒ|%{?FiGcft=^@E]w]` O; <X}UTThap!H"hai)<1 Јt*}[9W#!@F 9j>z.|BI <XC#w&@8S Rpc)trȭq TnukNgye xqxSBfͶ YDZL5ժڀ_z_W)lE_!01ӫNxVu ? /Y#W\DW8*puFbuHy%?~;voKv0nx]hc"1MP )K$ENmrpkUwRk[Ҍ{X$Lnv?ehQ!)P)ױ,W%r%m\9 %$W)\eQ QZݲ2,''E2>9-h+D<ŸZNѾflmp6Ox_c\B8vngqoqhHqt|S[Eݷ)i2+/eypd>W! g+9 Ȁ[GGFjMp1ЌihIy.FY+3JR9_KC!Dw/c9N AR-lBk׫pam#c_.5s X/JOŪY&7? 
J| ; O[CJ*TΩgmR s{yV'f pg6Ʌ 'ڠ2L>&╮kh˦vmc_65XSF qKZwK*4I WQYNY.>9M%'SoQ$i TЊ<*,2% ǘiBx4u\g3Fq&Ʀ0bm)h:FqmcDL:XOE2@qfe4Q *Q9H!?kM 685I*ʃ2ϸ༆(#hI3P* >G՝ .8qP/D#Uw\򌸤| jw#LE,|&7i,jwF6{7W騮6omB*vE,y `qRC\P‚7[K,q$᧓80]![ ύJq՟³p8<'e*F$ A G2@k XU[c#ZItQR +#I"Bŀpl ]!Z#eHA-NԌn?2]Wnh[ P.yAW]AGWv=UDz/H7s6.a8ņbK=FørPh2E/_x8- C]v|XP|efTs^&+Fy;Ϣg*ޜp:$Rp]ؐ F)xN<5;L(sȃ Ͽ  !$HPLWtTTwSOѩܦvA5zq w\t JPjdp ^kfE4 88o Uh2\fbt&Ժh^E ZDWl !黡ժt(^ ]q#-+ T2OW%aiX KZCW.'mtQ +Ip"ʀo'F]eDGW/"A,zyTkCV WnS6=Fp+T[h:U73Jm:~4yĶy?V_eOJͮ[E5A!x ŭ\ξ99,΃Y7&߻s=t3?FWFTU[O ǜm׀N~">v«wvj8IK9h ,x{,+d74}CJtKK\45lygK͕8~_j $G κ$ Ǚ-$%L_ш8k ')@GI$P|xT ɢ01"Ĉs^<( 4pg#ZS*H\@D.9438 2Q1xΒj}5jr8WlR-oѳ'żd2jEN|vx r)0LddTA$\uT)E{ M w^[[s×;ygE7{"^Q~S#'+rGŖBۮ\Ⱥr!c5ӏ_)|. !Χ++"@*hN-uNgHv2y&7RPG>IFb, (ǤfVZJ=!&ji.*`wItOZ#PȬӮfTRVC,ݪ55"efJ'.O^A-|LGqqёԑr)WT]( 'Ab /VhRzH! YXS!#IHc'@#ax=zѲru{H K0rE7VmDȼQpV[#H[w >FKЕԭ.2tq.ΡRO)ETl5] `ut6߸qXW{I#IL .1ƗB5l5"  dS)iN^vʉ&Žb$h%jOy` |d/oq,gKV/=V[٤c[\ m2qW6 WГS5xvg]Ys9+}݇p$.G̃ӳ=/㞘@&տ~C") ($ \2Gi,Zs\-twnmV67mS??gϗҡ蛚[n_QMXTkU*=LeϦ#*9fVJ zm/_k[B(.62kzpP37,Eyt SR`h|WV-WSN'Q~$J>umD+נ!Uaɤh `$gE$jɥ8&gR Qk602, |X7pZx[E#g*RM@C4A=ɢ c4 IfDUF/SRNWNnoMYf mM)فTBp,y IE!$)l# V%J"Z^q/z"xq7JyM.#* i(RrO1`\C[%(v0PG$wNo+}FH>pq59+mﭕb_r }'8"24eg%ttAhXcgFYnCP ݟmށ"h8}a>kA$KTSƐAz +*RhM&&'dJ=W11<o6CG PоI aWO^! 
/}=Xɾu%"ekx)UD)lRD >b:(oQXM߹#"[;Yn^plTccfDO<HǃvO\N/Ym;*X6|)-CV~tP4Ψ F-d.R_\޴S n2Ǘŗ:ȷ'd rѡǦ2IpBEgBL "}ّkڷwYɸWX+ۘ۽/.IM<*£U8TOO_(>?ϵ=izz""\19i8$"_xv:h'>h'D]:OKئȧW!O3᭎t+2Z %$&@rΑ[.KA3'^g+rUtRYBx*Ɓ EzTI#Bg4-p"Tզs`ƚΏ>A4jj$?-3Ɔ64vnY=>, !vd)w2?tۥ߮]wy%IPq d:;pSL!Ȕc:g܃df"9$4M]rA3+'nId`0mr}C&9sC<1BR0G1cKPDm%6{1z _Y!Njc3a ,YH#JCR.s cӑdT'ޓzhroh:F*7E~Y**ns炡cЄI85^0oEH!FPl@v6 f0d ^$2Ϣ)rۍB͠Ltg45_Oֶta'XD|0J9MI ,4*01ϻb _ߛ\VOZS&vznzmӓ > (5+[Rq θd䒝-%J'~6Kat}28I#,ˆ@ GR9ش}Z5m0~^\(eOcRw%].}+HD;6J{w:Ui$8&KI< Nzit(RO[kZ,3#5cldr= m8Σbo C"gx4301ds$PGFtcAx($G׫݌9w-ΈQ=j]veVvYzH+)+"Nt}5#"~O`#:SM4Eg /ω?~_~|ϟ3 ~*HI4c?7 |7}CZCx檗~ȸ;}ľYolmHO__X$WOt95I+U6"p6.Y!^`+B,a.O6;]ҨSG&|.ARMRڢHiKm,SىT8kQ\ZWⰒyM>mwWF9}2^Α>T^y(&tLw]|އe|0^ ٘xg%p]6F B3^r&FBS6 6(h«>W6 vxKvש%Y2.~,ུ u%=2$~9̢1[qC:q}Hn .6H6=Oqr5V4M4̩]9 Ԣo&r~/q|)sfDx@ӵ`ALQe58[[]>(! A*HqZ,DOoT@MI!L4k0Dó%JzԗKk`]:H(##3Kt&̥cL"alFkɝd2#(Q7^8F pr]69ibRwvJZj9$l+@ FLclxKj6$o|Xfc7ALWkUu8^u\& ZkY_|Y'؎F_TLu.F05p0ad;=alcJlշ!KE e3RE_,`% &7BپLSizpREigοEM*jvѶE_ϗŮ]Vyϗw̻d` TO"߫4trk5bOop$X pah'zڛ $;y&|{rݤS* lܥ }$.2Ct&sBNm"OzulqUrkN>?"r&r*Ev%!Ȃ`g &;8hi\0R,Ǥ]8QpML]L*[I?+^~E1A 1B*i-pc3Mg?i!)iZ aiP>`jtp$u^ґMaQl.G49oKḳ˽ڞiwӨG. 
ۦKCBRJJvgA4@WJFmMF!bb\I.xnsoj\$jKj/w(㡲VO^u]r~Ui[t?Y\zlEƏF?Ķv#?MK H-1ߔRB2:RaYbU-@P?ElJ]*2 h+dr 1e,Mg1n("QC,m/{5j!=BZ]lK+lUh^eoc%1|KrV#JU"dȁ(!YE-QpL%-BINU6UW1`<X>E"^"qgL)PLu}*;[Ϝ8dNCJFC]YΘ6@qR pD.r&`NSd1&-Ho8W;tK;YJ(Ee[3r ţY Dו`v=VT/V;t_Dp^ȽVO&t`x9/KD0}IFcu^ݲB| 5޷1a3~Q<;)^ ya5BC\d3y^)^p,`Y`q.PQwژE Y6ך~tW S3w.i›Q|?h_v1P vɺ A픆ȐN,͛oDdaJ۸_җIsKUL劓o\*2)sEwsD)AQ7`RGTRJ-0 BX>@~ޫ-+M.뺰.v9P_l3zjƴ'0Rc8@~ ?—$l6HTfSJrr6MсG?j#%[mħwa<K76Vnq't`3)iaL R`r<ֈqϊpECNAEKAZ_X:,~Y$]d {fy1Ylբ†f:.֏O3?n޼v'L:j7x{ oud5ӝFcq^_:RfDBUX+|*IKخU WJ 7pUx_ :nWI9}+!7pj_*IKw,W\aU\%qUV2op|J(><J|*KFLRpd:ճ+)q!Of(1v8; NAvI') +?zGuRمi%NFH3z0%5}}zK^$tW}$yNR d<@t\4 w~ &FdW'k>(|)N[( #EB`"0vqL:L+8v b%QEyطM0߿6B=?[ ^M?ckkjJxqn08b9 (O]FC9{C9{9{Cg,!sH=t!3!sH=tψC9{qqqqS떌]F}񐷌sc)!ZbWJ]Wag%;3ԥ_5L<)sRm(\d01x{.zw:jE ƭp jE1#e' <O03X =y8 2&8ZC2 w845E)hߓ5 1P÷s ^/s6!ua~z>fͫͤe[vzOFRr:A #dQ,RmukęY9.L/xPD^@AL$]mZ^%(kY@t7Q5gGI+ain4ܲ(#?ӫlo/KǰN\ Qƃ1B(1E2! P>bBZ z"qw9$PX뎇\c3&CfHFǤ"LafXN+fD00iH&ނ u>V n;HUi{':MbQBZTҘHS }I-1(F" UҤҴłR#r@pDZDZKI:+02D!cP0>ہdGW) 2РG9X Q:(:z!|c6DgPϹ!v\vzHtvDè6C HR|3kܧ(Mq3,Ur&#?_xBģ}()|ev!V D0Ktx.*0\p xqlMUi"2YiC(XK +믠A:,%1SߤmFS$S*.M$M+A˞L~htqpoI8SCV*(rͪCai֊Tg'T9L~(Ԥ. 
~~߼^>6N-"T {uR=k۹"j {Moڸ;a-)"$o/bY1dy1+*4?XDׯ0 Z0bZդ2;ѪITV:RURfM:LGa2GRbN\!.53tEouf^pˏ0O~8}g|볗??"N0U-0v X_6o <m MQ4&mֺiumVyI7سV mMjQy˞dw5 QlHW/E}ɢoץ͖w+Q!G ׷Z+uE6mMdM+Jk(EBZ q.'JJUXE5bqt0dV|aye5}إ=[J3A:GAα0ܰ36J͑#>߃mmLˢwј`s^aW69qRkN6l#86kYncGl#zӋfW7 iLH_]߉II̗%%#¶d'̓F7aZtopLP]a#bnT6dYgxbj` :{ BXՠ]O4tEmi˲!(@19}"b ;']R 3#aZ^G$/Pvxz6q/—qm@b4.bFj@6>]\mz^׽x0^C_Wm"k@셧fJ-īڦ7Mez5Noz$g_bNKiTGTj@<6~9 )k!FI2ZsM (N,LW O[ؾ*9e,+*1QA;DI)Ĺ4٤̩bt%$вm_I8N$4z.{Sh`Jc:Eks24k6]\>%E7?``~v?6]|蒪# eաQ8\fcWW7竷iZ=]y v2fn4&9Q2P3hE`ܠT5}4NcO*]=w(2R[%KڠÚ-DBr,ԮG@g+ U6jkRݢV=bbzQLqmW΢=e _u* &T+YD.͚1zr%jr{Mg0}.MBYUM1K$o8l"/1 n63x%~2uqv~ur Ƨ| ~E,Y: [p d%CGlĜA{v9%A,BrϢe$Vmv%k |L9Dlf$w2M^&*)Ӝ.jsaYUR8ygZ'+_P򽺧c?~h0uļ,xVNCuZ] k!v-ĮصbBZ=5k!v-n] k!v-ĮصbBZ] k!v-ĮصbBZ] k!v-Įصba:R pڨ3N6ϔN^xR/ Բ >{]'t.ڥ[tknҭ|Kvͫy5tknҭ]tZjߥ[;4.ڥ[tknҭ]Kv.=˷{`_u`v.ڥ[tknҭ]CR=P޷h.1_L8/cb~{?av/4#&ܒ~cƜ57F#~">^(.䋪ToA9ۋW4- ;uĻex*K|Zε&dw|Wf"M_8SJEF0Ra{JOfJo룛܍XP[~-vmxwɜy:p`l3 *Zf83 th>vۈ._ο;*aFKJ5rϥbW@.pl5C&> ^1 okr6H]tA0@-B$Bu46y8Oxm2Ǚ|: Z_6Q{(J_ 7"A/Vɘ<8)E7 UUc)^]A 捕J+@S;lHl8ۧh~ nśqp⨼=ĬJՁI4]< X4sg}#MyA iْRI\*JH5Ȧ⣬]6'# Bh@526vdR,XcXW,|Qٷz~zHavY).ˆW?yFyoFl0 &#O[RJ 1b-7RBtʪS1T/$UR 0@4rD|o Ѻƈl8;Nsy*:ފھ1jڝnxT{K6dtiW]+Z1.XIpŷXĈ&f(hT֤,"V!Gr $ g;N}USAfq*;"vDdR)-57UۥB8f6)` ;5K4BIOX 8kPcJd8SDd)*;D%XHKZ g;">\gzՑpq:YqQ6|se.ɡgqEӈ,Z"΄nGDv\ . 6no=Ѿ>aܩv"1O׳@"{ZG Y_2=|`4 b!zZN1<dп {,{{o j|KZw^# eBEJJ> ֑cd8J <Cck|qmM4Eh0(9(s?bXl}j8)k:%q7Z{KoM2fyzO6}}v=Yb35'ESQH}S g<繻T۹xɼ~->Yk~*"^W.o^(.hXt@swŒH5=YܢS $yނ  (b3wu!ĕ_nk0<[\ȡS{?|t+zsC0Ӈ7vb,pFU-MU:\Υ*8?p4J#WIy*ˤwdcwwCu4:{Ů uٮ]e֘]=_lNL@ƴ^˕?e5 W4(mqhd`Fpiud=dKDDD@.{ )i#jaJ@U*@LjS DR1M[uza()ͅӧT'kU?Yd(MrB%KIl?;!Lq.elKGp~"Ʒ>/ e^:,dkIЪ-%[ADI!:UDB_w.) AwM&[w;0}ެݿ9Wza5Wb7 \]`рUBv;8up9st7P*H }^<$ 5(jԾq<@d%l墆ùR:Ve |{KHb[ΎĎ|pO/B2Cbh~hQ\J3mcb@~& Ff2/,TS^^W>Y1 ֻDkЬ$. 
F 3Q=uI!4{VNY m;8lg]>ɬG)(қ=WZW 120XRr)Ntze'J/`U ĜD )e([{%"<^B-[}#秪J|ZG?PtXJ얮]JyơM'%]Dhs׵/nAT~Z3j=m7wOssXUVC7W@%#3W/\)JU9v0fw.(Ûg[8 -!~ٟ޿;s6N˦ ~~ ]J0?AL .?ͼ5cнtë s _> t8>y?(T("k"jZRˁV*F3kT ayΠF*:,+CU\v^/q kL/B&@U_^WPք\`bf tP[Og^mj3/\mbTzS<;L*Z,Au<10 NrH8+׍Ty;-c1 I\̭B UX`z5iѷ[P&wùs?^1 :V0#) F t`w1/ 2O9yͭXV|MN*ɝHԷ`++3jes, V&7(%ג)E 0%c,֨ r1xԏsd kX7׎!qg0A Vܔ XW.Xƒħb&}:9:Lw lO%ݟ;ਸHB13<]'IO(#XS > w&b${S;h8? 9)o .!Y%V-βϸԏӹ-S{q 805[7…лO2dd #E/6ڄi P~ݴNfFjP^]3'n ot:}u'_Ud=&JH::ƚ}(u>#3Nt3`$D812 58O9'x*$Tt&.42mNрm`$Sbr+N&ի:d5ouzfDYU3Umq,G(Ӂ(gAcO ܕum+qpH7-bz2Ki6zZt&E <2@P!cX y%-8Zj2E(oU 9.l5Md͋6֌Yb1+mW/f?b4J״wߔG;r,s2Jr/JwSCq*|Eͺ&lO U|:p4y\tsў`rZ,a; ar.4"șdAݔJ2b]·ࣃ{Tq[w/nG뢍x F1{l΄b"C2^v@%H '%u f$P0z"'FeGFۅ2(:o;cƌz-#cR2l/锶Y\oF2 f5L|vRڒ;7pv {q4\}oܠmvM.ܼhZ \[D$m2t4QLBa&'@q1a,;.Ze;5ڬygWsZJZwzmoZ'*=侁qQϪ5{W.vsqoɱX'릫;_wjSܶyJϟ?Cr=7ߗo GGV: K1>D-=g+?ϟbKuǴskɔS&Xrf#ʭ7: y*H.BheR1AT28gb1$%§J^Hju`"b*al5LΣ$/6C"~{vu6q4ۭ2DotI+1 Cg`gFAjoN<=!exJːr`]Rl+I!Y~@~tCZ/@"uv@)J+O{'©NGN[A-?c$’1t3<xz;tY'A CwRxl SJdTHh=5 ,O+'uI/;prGaSBv!z5b̩ #dAJFhM@C贾5 QBYB%:ZGԣ@Q۞]Ϯ'؜ȖRF<)paEuM7C:xLBTS&N³ӔJ7Q06fT6f.0ō}r 5,} t_zɎiJ_ I6DknPmrgZ hRbTP51IUJC} %>4Hum|4Fܤ*]$i\Qwhi DD×B}mFI[mP8vk=QeJ&Fl,AoEH''v߹fK(^}X > w ^dH`/9%Z:((o5!nkOZٝlcxOzizq`vO'[&kEp\6_|jowYhE\xN9J)a~O~SOϵeG5x MA&y pOI^=:-ld q^)Qf~?c6K;輌3$=UZ %lkYg3пM6U;My [VYC(6Zu}dY} &@eV10kܧ()dd\x}b0׫!"A|A/Z-""!L$դe'X3S`)"'#tr!*$\d F$u^}W_%Ed YiC(XK ߒo!=zc0&1fuh|k;n) I%_>;;^<2S0\|5jC?9AϦ6a-$?BBrݙS:+0#l1:#| bwqY[@]OcK31?0D_:GZ6 Y>̊ [PDFN>ojFOn]&'Q Z=j=e.ݲt3B#i`_ñK]_O~,EO Άi=;VvNޠ:s%w~}O?I}?yxoT;4+kn$GEOW Gڈyp׳vll q4H5>ERW(_Z/1A3beӥKZ-.--lpi.jor]]\C+2ܮgap9 ,"?)Nw&Ƴ Mz&~99 .zUFcB,a}iĨ}#=D;3;fo("mJiG#-eMNDqZ|&R.iZ8,|^OǴǶd9 >/sd#4^y([Ιi<ŠlbQ%T[%< &xy "Kw<}Tyi\ӣ.qMOô". 
ӞeӞcd[=Y S=YVJƏzyF4Ӫmj^;rKqH+e,n<{/ǁt|3B1]4Į#]]Vxߪ=lMiuώ.N6r/L{i-FK{%&~ \Flut !sdK }6"ݭ$ay`Ti^}F< PxO'p}[ 1 :nl-k.hk1F&?{z4`puU2sAtՁѬC}tpR+4`lG cM!YRrb2玸6*l(g(Zʶ(( lE.keuYtjYSOA!vb^}]h MQ\O!߻d*Hq2 V4Xj!$ŤO)*з,} q'Y z`AL!TВXb́39g8h4–o?;( S\rh0̒kYp h%}fO, r序J!XyxuuuY($.Y:N7Τ-BĉOJ| D'W-QFx"Ds:".qI笑9F,q d5i˅Zf?Hycy?.  e,'P p2i\O%크u{ڃP⎻g?}Вӛ LfPC?@Շ$Mco 9ʖGsk!ʩbH1h4XP2WpzPbR)ieɬ"Gt qYdNIj3q%s>SH2%>xBs/Q@5ƳuO Ag4KcҮDML]H*[I+^~ɋCIbTZUDc 'j5sOШnL,:;q9R~,ɶpHJ}'r^ͬ2iLN{rå]?[S_wKHFz(БM"N)],6@q1U@'擮59ƌ6fJ࿉١(J96g&E \qgrR=c5s{zX/62/TZTm %Uf ''aß5`0:ʞ=k:Id ZT&z!A*&e걥1`QHQܦ$5D#\4Kt)`le]͜Gxb׮6:ڮ׶k7j먅, iuH2EcV_%U6;V* z.!3[>-+m|@"dȁXtɐ-ApLE2yYET'kG+W3g?InqcyWx!颩k\ex f6x3`2'!Y`%] !GA.w3 7N $wY.B҉`L6bC(񸵅j#~H⤛Oc6J^/giIxdVD˄.eE1$тXg%aP@ŝcFǶPU[wF->~ܨ,r^F1[ׂ-KJ9`bc'MZN%XX{J-X-kkgx<@gFwٯ>;>@{X@t|&tqCc{S Sn~^fjz6ÈoKg`{Otl^:u4LˢWwOw7K\e>׸dUJ= ͿM.onq0_m^P;Υ` DGMv"vT$0*nXRoU.oJCuQ!!zirgءGѱL?QZKǭЋ2^|gC(YJ/Cͼhfk Ļȋ2Lښ`*GU(dY)48 Y Xȝ6dh00EZz셎IhJwKoiw7,˅x =ǬR A"v9q(mEcvߢXv?2뤔H1w9s y.š['/;TZ=Вa׊_=i0XmCZAZ@94FShjMMi454FShj-¼i454FShjMMi44FShj#654FMMi454FS8Gii)-ضۖbRl[mK}#GI"-Ŷ!bRl[mKm)-Ŷ! -mўdk1Xl-jw*d-p~쒭J)dlڙ"jaM9܍Mx(-S5>JCG^ݲ]s9 PFA€ʐcYF$pZx[Lf=U/{cؘdYcihLNHbc8:(A{nr  *p)ST 肷8~ړp>{ {&=mq9̎b փr_/IZDc]2P i΄ߒSo}}Ƚp@g1*0(Agd'EQGdW !F0)3J9 +w)b).r\AAP;9 qIn9 + {rg;nK@~j|oD>W?9Ԋ+'lN$UUGjs2&xCtBk2(1ev'Ky~7='^^r)֜" |́PRݧ-N9c< m+|m\Kw(aE5J=0Q㏹^.|tcCGBG`(' !2CvkRY gv^IJ: ۬_"@1+`ޡl1Y4AENL;$wa B )1CoJ{_ <,o2t%ft np|:',%Bz8 bb=2X}2)daTR:{JG^VOKx;);ey4 dat}=A f7Z`h@_u:0 ;C5gbߝO Jo=Oru ]/u7JN]yz/5/F)P_%x%.w\1u|㏏[]tסљУ9l%@Zj[KmkH@;odJ!F#XɁ$2 cq\6 0R+:'\Y[,`evMUxҴ(\ieduץ+;yOfпfjQKnzu85޶YS9OZvrQ`nW=ޟzk9ߠ{-Whŧ+B'{0=睯x[&]?L|zh.ntu"6wm7? 
"3/}ySE-G_fmyLk?::uܲ(ȌjL2uytL%rn-OR䤞A Q'B@aEιy%nHKWVQK.ML՝؜gk+v_"KSfRvQ9!HK{-O6=im?݈=V6i4GƳ @%SR8IE&)l:'L<#X9lo>d\DR^eUiRrO1`\C0a>ĈyJLɇ vGKwOj~?}Eer~̟I*.;+(cgFYnrO3O/bK]O<=>Zy"`t 1%RluB`-k)p݇ASi.'q~`q>k@U,}FuH9b#r#+~IP~^{.qrlSGlbR$(r(lͫU]]E<C;界` ƤƱOBOD a]52.h8$0QEM[v=0l/5ēWܨξuf\IJbĩ{6pxMɍMF!r՚=mS7 'V +cs}}>z;/Ŝr1聐qOg p5Uݨzp$U1³֚Z+x~\r7.٤No׌V}Ѽ,\#&W/7ŗYow/NAI&9G .t: [ *8b )BBpش88޳}Mk6k`(ݟ z > w^dH`/9h ȇ@=Vkvg8|&ӫ-_mEI]fFd㋵E}yoӻ .{]u'V t<3">yϘ_ 䒁1[qN>7c?i3\1幈ԴgvTZ/8؟l,Ѳ3OyeFH sGy.ҠS $zdW:Xߞp =+>0>buЧg8*&>0oBq(ք@(qIY+iv c@ya~AFEEg!|c;~]v\$0I+؀˿esp2sQ5r8˨7N2yv8uL>[q@/+([dX宋>a܁qQZlh )H3;}j!|k.9GGg&/SήLHQ=X_hryTr<8X׭49.'x޵Q< JlOYxr3¯ƫ䢺p FA\A/ʱflEj_IO/`>08aմDm)՘˺fH}3thYރ 0: F0bZɇl7m.{:'ImꬓuURnt:LFRÑKA,yďpgr/dE ?ʏ.goN_}~uϟ~{3hg߾QF`RT֐ ښ߶#oAG4jڛ5M[iLu5+rMVmB(2j_.t^byݟtjUK+ ,gDA?V,*D ߤݸ^ڑFZF;8Mv^$Q$ŀByT"2aXL @dk(=aB|o|l2{~jHr 9$g<;pÒ9"a1 gl#g g;{uZ1r0{tf{gʵqM+NqMw6Yb$D"aM qk.k1s,&%<(E>ג)E 0Y+zD3Ji >'FG4P0vY4a#A2rVk/; #,l-.{T)1VE#}mä}" _5%~a+])~1ɒSיS1{r#\l/NٜX9Q5eOk--zHT7 Q17d 1L"V !k"}$ EM/:%ڻtN.oDK'qVKYIQ08D >(QR5E4zI%t4D{S(y̝vX-"F7zf!% 3Bs46ku*zymfBV t),~_Vͧ>x_}n7>AeSEg«qTUG[}G+׽ AOHx#97L [MTǵ0-SKUPiW_H#&r혏 F<NtH0A X7EUy4 ;H1$Ke%ixfUZXS,0#jZd[ kTuF1ZlM)5wF-XQ9"XR`%iX#a g}r<'.p~`5!1!kl–Ǯh-g,9\tg%0a?KB?=sXtMdΉ)nppJ-: ʠZۈ.eÍ3iKqb`) 580'`5*$<3@cLWL#,8ZI}qeI-}' \פ'HMt?_/1/հ-ƻ·PyN"r^{^͖ be^E9Fy"p6]Œͻ4E<avJ+b6g aT@Z։#)A\1As͕V8@熩d!jķxߝJٻ! 
|1TS90kJ]R;̄xG3e]hFF?MJMHXSgI?[ILo0sWQLHzS<ɇ#'wpFX{pztНnӯ``>;v؆JB(JzTP&R+`:j5I-3w?@tK_K[ ?\Hǂ|.p.\=쟩6<gf=ܻ1G Z_ٻRIafǏ;FЬDft414)+x upcv=椂R$U ?:^zԫ.BH4qV6OA߮!!EϾ<`@g y ]CE&y֐PICG!]C6z,Z`)ebڳx;:::hƆ2StTvy_&Mg1vf;VLjP,Y )C JR'?jw@boT&d:IQN)93Z"`&UnŲgp,;2cu{l5JvM"sfN*J!Y4`~8M"WChev}h h RW`FF]%r8`JT*ڪ8bۭgܿ5ZLzL8"&bF,G& EbwG&6,y\}@!p,DƳlՀZCD†I ^T )H] ,&\r80, V]u% U"X˃QW@PU}WWJ[uՕK~@ &Jv(*Q~1Q)Zt2ՕBzy8ʾJd'iqS(/]va*aF +)~z(."N:W;> )9ZcJLmS[-`j 0^@.\4C*" l 0mS[-`j 0 dZƶLmS[-`j 0ZrcRp!&(I"WLtV}CLTjن!0D-Z84OEDIF{}b="6Z宧<9z#d#٪y6ڒV`-b,`51TA{<ՑR{eFDEbZHXȊp) ;F+J:Zo"='T&Zc]q:\3mRZWⰐyM>&:fzhfﶼ)G%t!)H.ra19g<)J|nuSbgfO$ߥ^F56T[39<9~S^xխR;.MއiݝJ|]ٹŹ6a4ئ8;du(难XxL…$tcyhi*@NR?Mq:"D%Cb:p]%=;IcҚr_:bAM OJ)zG'rDohu"GwV8HۛMVզJRdP;X݈/,gO8&bAd< ȶAg9Ζ3Jh-U5!Ef If#{w19<$$ W^F532VzFM'%ڹ,%I#t(症U>B%XS{BlrgIy}M݉&gm Ktٚ%t<]Ý=|7;.[|5U$6ɳȅ;}P4bԶ8:ѼJ@8B'uIUc]eL̂JVPl0>:Rq.eR%c鬗J5YX2,,ܩ,QY 7Oi e\5.Û?oиpm8x,M\YT(ğ:+!I# zߤPB`r"X B$U%0"#mCQ\&El  ¨FIc !e,MgQbԮFJm[Yj^jvū@8AU I: l$ќdJKș Ri$dȑ &Y 8ɜYFHN4Ym:aɠs E"VCD\E T0.u ANx :sAVRm{H>)'۬4Q#"g|`R`H&Ҥpc(Ij5iWI.NYJPֹf/f, Dq$1$h -Q֢50S(^@BbŝCըc[y(+-;aW[Y{Fnp_rr֍bK}\'G%*58&"ҾIÑ<t2A|㝇eU`m̼7N|InO^Y}1hɳp"Z!IYTfH =#\AWjBHgsAH,:kk3r1+D5Ƅ Љf"CZYJC_e`%\x3%jZ=W|t5cR"f+OEGRiQ1~wvdD'NWy8I5Q;Hxf!|ΪƺBjXR6^W5ۄt[/$duA9a 灡2w< 4}Q1E QJuL}]:u!+(YW _iqR bŐ34[O*/딕jc3S"7*,Q "` :kVsᔄ)ގik\P,Z% W qS,NHI3NUO^! wwA:ޜJ9l4 \̳I]Ō KgFe3XM߹#}jf!؝77vߍhһ; o:qIݺW kjᎧ/[lFx_uFm~ɝtN&uCwGN]0xI_=_v;8J`YQۍz#ZF-EЌ%:{K y 1 ]kwYɸW<w[} ^FxIA -:e1%N8f+I3M󃩈\6+?!sKE\xGpQJ$>kO\εr1 K`m죩E?Ɨ\x|4߄H(};V6_+FgS㴔-EM/G9E-RwˡJmoUY~FžBHB(6b tx2[@T}7Z\=]s>th$G)xu_jd3Hc ˅lTF(XH:iWQN6沃ahp朖a2+)s Y$~dE|pb!wtqA|g_|F(y6քo1WVʻ :hu,7&*zU^LKMսf. 
vzKoݏTLhޭO_|5e|c2 &'/53]2zSHGoM(i]+T@0 $2Ա9yq]\G2q2~%L*ba}%$}^7>{nd7ߟQ)\(ɥU)IEJ眴"1уy0o^In5Ud v)`?0N-{Roߐvdfɛ4x [h< 17Rp'HFF&-8 NWqĂ*,X0H,j0Ce+-s18enX*ry:d#^ XZW[ԅx;NV\-.}jޕ57n$鿂;TG֥~Y;YǼ::8Ml,H)ѭb!QUf"*=r`ehiG/)e1Zf;@LRdJ=2Sih-9*NL\!`+>Bj%?xT+Р*%,U&WcLaWH&7Wf_bȃDpQL0o }ʓ3'b֙s|:_2}#3oŲ#{=Κ$&HKcn7Mn'}tV ˹dy9LV1-.ⴾV< Aa+iANNI- .խ>8_@"2-ة;⢓?f|TqKK7W+|FyRNYǽs JK>z8Vi_WջHvJ F)Z|j4o 6vKrvS2;$31k-ey&HUCe)!ǓqRj]UC*2ߢ& #+$5cL2WH%,iWoJz4pJZE*S;-•Fk\#+GW\n2B:\e* *Rr;ǯw5+uyRZ6{ޤvhAAĹߺ1*lh%e%PN(M9lhzCQUlO8jNr:٣a_F{@{ .;"} .kJIXG8{S猜7'Sx"KI GmH lF-WEΤͨuX/4?fc3i[z蒪 kn|/kܰ-.Y&(;I0bSwuuEDz:y>X- WKK6Zb)^+˿;,zNUjWOqqm?TkCP Aao4 Ңr SA .[H2 JXF"j^%tֱ$'LdKx$CIْ = ^ gs˃qX(גx&=eJ#(ʫvK\r!(CIBdP IQ9/SR.9O'GW}#}[C^=+| $QQ2%a\`%EX\H-']J*;tQF(`㯫JfaULJ-`:jXXCi(axЊ- Zq#WV [pZ5tXm0b0Nˆ@s>u^(Q"%+PtRFuڱ*Ficp@ :g(@,J@mrpk1ٲ9 [5WP} F?^ (EM%h,1H%m\9 %$W)\/ QvZQ3% E4eD1.@)EoU[<-b\9ЊFsvRn7ok|mo^8޼f4쭂 o yrZSמ1A$c$`=XVEh( *Q(D DGڵ鍦HhG$AKg\p^CM=(nϜ5r9qqRuHָdK\d-:pŵߐ#'YЄYjbh4J܃1`4G#L0PA^pP5Exs {|d3IoQ?o> u/boyjg/=[#_FD`o'L"%ܕ`, Gm E@FVUa9÷bP7,y^y2[>J\7jƃnͷJ\Ѷ(W[e\J4w.a>Bꠘ|R~!J;-EP{_ea0:EL${N~E &xke0ޕ 5NTzS)iЙ,o3'~3,VlOVVXŃѤW QuU x(Ci4IE@prΉ?nyd9TsfŴ>f>O^-kI 8ԕ#SY?)}:{8lq9'&]Yo{njVG΃AOny|;^ f^WW1[S<(x֓?1qm _ّqNgχ֮}|ZS^>jc3rlsLzWeIUAp;6.WVq[pWu!M8X`꨾R <S^Ha. >BH࡜Gs}&!AX./&7|UL>F#3XU]򕖬lQTTMR\bRiJ')99.Lv]\8l;Y (ךY +vdn6b1ZB&=yvu7i^9ߨ33(=2֠"8U1l,#R;&C>2ޘS|phFboK/LSա s+^8fh;Fzgl9jѡ%{BLGId (X!..:܍NqZS!A{A;`90撯R X%ܓ$""@a"2v*! jR!e(w*G9KmkA.CBq@9[/o jE*Or_MQ~Y`C >xr^{nځ':i7hmp'PhJT f) rU<p< z:ymUQy d5DXD3(:WXv[ƈךt du~{U:zg/G/IR{*B1jjSp 2,4A$yԷ&Cz~ B_ ]b6?P2Q)Em[v=4(D/YA uhkW,7R(I+ʵySvF`nl&@33[{H@i8nIPwggi8Ymx2Qgp|SF9qG/%l1ez{kfIm'n9/tO:gѸV7D{&\[XF <[ٞؤ 6g4̡O1X3mKc[Frs-Vu7;{VKÌxDH&wJ!ԹDmd qGhX5ъkbr`4Ҵ>wox_7M2Հ OIs?b'B1irb›jRL`#^z}('qz=:|mƶ-KY-K-Ԯ^xXuuEJÔҮ U1~V9A, fK K'/6jRS `L :P.J-K+S[yi]<[Ѳ4[-Z%DBFr)'rr{ 9I5 ]R>t+qQ'Z=(e OwmqF4%`\Eɢ)D!0],h4up`)8ښ䭥d8xr( !) h@EeY {q_Y1| Xu,+E:'JmFI.*DcپJ!}%`鮾pQpcG7׋}npt/K[EqDG/Ȭ7  b9j.dC 3tܑ]v@5&]F4[ۭh|#PQ%gUEjQNBTjL8pm hB>,*f|%%2I SU1:_9@סO#nt(x-kg^tOlg#l_b4zƕ-8==;@)7! 
K`(K*.9j#G9]řX}6jrbgPFl!HJYkQg(McF|B.V!L|䘀m VKA0\ƙM׉^ zAǷȭ1᧷znsU+Ժ1#^}!?8^= i_L2*n|ѬッO`3{TgB29 ]t:;+JacG0VsaG$r%i98_UV cKۭ[ .z^7w]8?_6]=R/}EiG6U6DPb@WI) *2[xl[JoRkorR\cvN)/1a` %x# DO8]V,ʨ(,£~/m~n͖E tr1=۟Vqo'e7W_+߅yQyr_f3y_3brݛLZy7/毳OYO]Feqk/{DhߴɃ;C{IP3Ͻzlϻkχ{"/Yz 3.X|岭SoaG2POjnv0Ft` ƈUIt+e[mv@]賻u.#M/SWfh*iғwB`e#[=>σb,"쩴Ե4"f}ڒRsvŜpVVuГRQ 2+RP)ȚjiC}8p2E! ki2q;sD*<ꆸ6Q2MyIKzjww!>=WszzQs1_Mt4)]&֢ 4oX)RK& !feA;dN e uu|yn +grhA_d@UI":Zj$rNw_y3TK*!w{kV—m?s>@u KNǬ쥎ka$qme "yv44m%8d"/-`A;N^L &!!XvOA`5 .^D20L dW9s%!k^.^gQ-%kHEIo⥌ٺ~L%ok'.{آI<e[pދ7Nډr69W-$3ҧ٪so߶3?#ՓS_t-_g=o"Op&i+K{ )PAGgWi/xyL^9onHY5! xKl&ZsEn, 2M.>4.*y3{/WOG!< YԽT>O\,WIVw f}{b|Ӣ8-Wi/m.r_NM>˩RN>\A9{?6'cFu~\œ;?_nm */9 Ng`q_Aa0\܃2DfH}èan:,hd*6K~4;Zӫ1ӳ&T nu5ɾQ괡%=m16Ij_}~*Nډ"hpC-[ Oue$M˟~û/?~?݇ND7g%hX $qG{ Z[ Ր^.6!3#jsv.m\~?fij+?zΣ?-te5iJ YGlE$-bNnR]t*C<  Тq7%Z[[}h~euw\ѓ8EEF5gq"صM$_g?qXn.>ޥ36oR(.(9SIÔALv\FĒM!V z7v TxA-ï'Tۓl&_ˆ}eY 6^ܡ5EE*D\Qy\X)Mu0"\2P%]+TjщQ<11Oh~P^IUBU9P3EASfm3TkEU1(&jxF9o$vyBzPtJ ]TPp+kePj2a+"Ǜ ~\uXgLӄq 7؜l k/F1/YfU̪HN1ƀ/08BL^D}ia;PG^k:J{i2dpҫhR0 Rn dPg r7V[/5~ajg$bDN W$2+BiΏ,Ѵq{ ȋ ":t@2GMH ѶS/ Vg'ESY_~(yYQ"KMxYI N;>n7n^wy2u1qWCj,J0 w@܆=pi/&9 HT&o%!jUW@ @v3 hwp!Yb#H3V5漞3b- FyxVYK;n,L$frP% $`?Aj؁0"ep%5b,p0",0wϢb!t",U@k6֜ß?eUH'Y gi&0 ֨ `f)4ͫA(U9u[zˬ Eae5+In$Lzt`_5jI޲0!K-hL79Hc<_ح0:f]}Te~rsMc&AKL,ԵW@7]pD3 L[ oi ,E4hky.j #e`09q=)NFlih'3wI x XRcF6JtclrI5LT@=BBH>"Kk=xP@z=ji^#úEc4B_a+@1\ƒr\ \1DNa~;?i/x,ӔS! !-JFJQ5zX2I )b8eۆsg?YxdK0 ddK\ilY닚Ɛl9Ȗ>ymyM'x!؄᪙kS WZg{jVzM9+߾p:,;57ּFyWښ0|w;;]?0z+-_1y 99Z.MA,3~~;.:4e43/Yw&%7_r/HۂśvI2VoOrզ#ԋ&}ayOyv~_aٯڪV])V| Uo~]!]>8@R>1$3Wهȷ<@ʋ-2'|G v:.6v:B TZs8i*AG z\#l|Qޫ3봍S.Y;SXS*d1q'?-zvVcG0GdG7jr͗+m~L'u~P.&.>&iy>v7}]r>}iuxh<}]:tո[:>nRMT ֻ KN$kfmBbf &RtnQ? 
ek>9/sIZö{a "tw)Bѱ~ЯGEa/jrOs`ÏlU/NߕnwE?}}X]ܝuM7W%!/WP*!h\FQСATjmL9DaEKnK.K/.n;l=>}V;h.k/Ve'^I|mIy(-o1=w?!F 0bœaZq0bϐ&˛% [cDp*w٧)THxRJ&6Llae|$Ay2Yr.^R*rp2 PrϬj *%eS7!̜pOֱkl"vq6ˏ]Z~φbyjz:Hޑ5cly@i,|x׋ӳOl^V[9ku]^s0(h Couc8:hr2G-;o=}{m;=bOọ M>}_C7t<,4CAnro|K<mmd9-@9qܨ0D%)Ml$L|tLW>{4̆WG{u/c/؎$xc0}eBh-)E᲋s'n(Y/ѫl$6I+S'LLf΁e L>)=[b3?@F Cln?j?mv s@h~m!-u*p0mxd!@KfjH,vhTd*)I~y;6(0<@`p0n\ȵX }U]YU7VGC`l ڙ!}/?d4kC]ƎtivdzEч~XGBf^&/.%>z}HJm_I>I糳=n޳ z&˭'֌nt9{?LyMT-)}9@KuÕ4nMP'Mi;1Eŀ*"w #IuxيhYj/cY>%::|lXy(G^?'D)lLR^=8ۉ3rK ܿFp6}wGЉ!U ϶-Yj+7&$&ozK:CY? zS}ӭgF-^}^ y2߸TBf#;-6ʼn9qB\cB"@퐭/@I*lcJ]4oĿ8?s_;+\ꮽYTJq~l;=d=t.B:FS%iE*Ydٕ][oG+_`1R_ !xOl.pWkEj!%J")jI<~%p2UW]] 9؟{3y{r`۱ճ>?'?z`ryF0VX o1 QG%hRFRu"%0c2d8bԮU +PlJȱ_ý~չڙn-=w}"jQX|)+?m}g_Кoil 9x)\9~KVŞ୻ډ>zN{>9-&\R\6f$y[ 8;=&N^HkX_k[W˛Mŧ?I. ʴHmrNu!d@!BUw^HuhZ |_qVخz|'7VHQ1Il z.JoBcؗ ); F:9=AC(eeCYƩ5^TH.zLyZc<%bj A`$@8 #IpW(ꦋ#D<0-qdzɐap FbQft_Y=r0\6MrkRֽ64|9R*59[^ /4fx!˨ Z%^-Xji6.=z"HR[GFp:;=?lMp?> `XcM0<#/GGM}zÅ62΄X&@$R1$h-LJԭc6rĀʡ"WL*:#8PځV ׭Zw)l, x&X9E,oOқ'=W,~Ue'«Z =vሖy>q4e:A4ՀѬ񆻆` ˥Halc{0VbBRᲲb2)"Fr։,`%i97Baq9e11iW>bfGۜ|3//dܠ7`9LŠV{Kϸ4I1RVL*зWUu=zgudv%Yr5Z:=A`&l*hI,1\e ֝5x%zz|:؏kVz B-+zfDgq~cz;mD.7I(JZS*,[ 2(|x,Z.uQCΥ` De,-FKQ9ڃKBOM7وpf&xR{);of$4ݵ]Pw+@^ZW8J$5+œȅ8!hWQ+-Dy|=Z0I ?&zVY/Q%]̀jg@)O ~?i}( 96hܴN0` _ҳwi,)U]j4mhlf[i=1r֑%NFia.ڠ;Ѡ9I\S9*ǛK.l㕷MJxb$k/#n`=Czv.iS*DtYb"ιd'VZ42>vk%ݭ?~Mz,57~~:zsͧm9oODs-@\jczbTt=xg3sZ),7\D /%׿ 9c6KAc4yBo;Q䪛L{b(1ܥą`Q%ۣ :RcʖpO9Zvlr!{M2jjd48,=V\׺EeۃUn6p||tּAhx6$LJC!Y$L*eJ1M ܁d> DEPKN\$fP;͗ Jat(I$ܠ0mF PܐF$WBڴTE EE@4om.T uJmuN8&P&N+B4.2'TQ[-I<^D_K}KV՚QA@RTP0kzC i B̡"R`$0ٰ1ZI+ltB&+#s, Z,PjLg4 ?<.z\(va{bsexDJKJ9&$oe]Tn*0=q!}`v^t,L|{IdqO# 8(t&8㒉vh)*4tFF l3"IVsK%fW&0iuwVݍ4!a{C4~1; Kp)N\kHD3\^__H2J% lu5x͡HNN3Z+h icP0W4{^vE]OfG,Ѣh;s~h[i/vQD6l-9rؚ$Sk[u͈lв#Ei}Oow?|Ͽᇷ>P͇w?@ ,y {wo뀺Gn4_o~CӶVޢiE\|·iWvoؗm 0\I~?i(G~j^8+mRnoh8#^z!.0n@`_ ݾ^:1+6r$[}DD-)sBA)QpoDR4ŝ xi]s嬡`&2=Qzt5O>69WG8DmfǍ\2NFz*IN9(.̴<ŠXł[oTؙ=wmP] d⊳mjLpcǶ.9s=?kx6ڠK( \Շ\>Iȥ{]E\X>g<BFXiA"ϲB%9^ 2 4o± )0IW⼮B5QkP6oepԠuV"cZ{̜gƈ1`Ϩ++HUAT2OaD6z}I t!6lS 0 r 
[binary data: gzip-compressed kubelet.log (kubelet.log.gz inside the zuul-output tar archive) — contents are not human-readable in this extraction]
eevl߶B:PCŁ`,zr;D"ʇ3rK9ʹP0pw#QvUz5G_goYgh0 =渂MAIg%vY^(/v9RQ5yLjChȢQVMtNx\b )"jZ䫻2non릹tt}n8p2s¯A7Om-hӇbS]Q%<2^dV +dK2EQH(Z!jSo_M揧s<|LFOhѾ?Vx=5vrsۻO./׈>ǽy"QQe]c 7wnumH;<ӖjE(!m"՞D*a5:Zk^OD@jZyI"F1658%HVԴğ*MۭzlGǷ[a>'/ R²A$LAK+ ː̂,fi$dIJ թ siG/q!r&uJ3^yZ,:l}*eehT+raKu:Ǹ/iT/0s`r3ȫ{W%wO>_/w~Cb,3DhuaM7D%b blOX鲣pQ!Tgs`V6E y\08n_N#]`׶ 4׋z.bjt~ϓh ,K^![[;]p:0$K5>lKw`|.&s%UPjE j %xw|x<+뜧ת]_ FhuAܓf"NKp|XҦZ7NJTyE&璕Ӟ7SȠ :@S`̯ûdĪϤ&qQ#k3I&<,$2RLNㆌbʒgzq_gF%kͭ}yYt۪vAۓgЋ"14qr6(ԄI^,F2>jCgvh_2sE,%8Xm-"r°r, rA2wao9O˦H5*gIzBpvojVn}ePz]pBp(&,f\K!O/' B$e㈒Y1,N`ru6' TɩB8n!:{Pt9 {/4Why/zo5@r<0!ɔndT2 0[Y~[B.mQ8 -ky2&W4r13lp%'r)^#9S*kA+:m>gp e9 )+(,%˒/AtU38O~{/RE…q *Ƌ(B{i-WFZV@NjVg 1$9ڕN8[R*!0r32Υ"@ɑm*ew(os2+εIkf ؝v63B1 wʅKDԶ5IC_llz~ET?g/FŒpM\nCIIɝ kQ|HT:eYdꔱG6`2!ѦD0*d vRsaѺ383v_/ؖcvkC3$#gEHgȐdK WF}E4wKp~Ab2aȾDv ]zp缓&n}7ǝ}nWߴBhŕZy@\y3\N{/}wSFio-ܖ_p6>'KW˓Bn?%CYƞ]h3=i ]S-#67SWǕ)ћ)iuItG܊T4hjHSCVj45t-5tlPSsLtѕcJjS ]!\\-thAӕcTw+`N ە}+ڮth^ځrOz^@z6^@z6՞6Bk]m /rЕj j9+ճ;T~6.rӚN/ߟYյvy Ϧ [hP`D&GO?~j4&sotEԨ: UMSȱs45BӄXiQZ>@iygg-/vGSRח5v1h|Z3r<2]i)/FkR*f6:/Giabǽ"nИ_6fEG;kKƱUosHM?]]N,7gqµ_nғ~7G1,%YjjDyb.y'|kA.GV,H)C<|*@ЗQ"" ()˙l'Svl7)ն;mlE 6V\nkh}h4ChCEtAc'Bpk+D+wݹϧ+D,J ʹ0c/FA-tEh;]J ] ]iaI"­xЊ+R!ҕ/<5"N% NSHWV"++ BZJw"tutSzA1#Zkr}>]J:@vE_]\Sкޟ J+xk w:=Sc{JʾIW]@W[=H߀?N~%NSq#=\I"FRjh;fh%Un@H2Ny7(Лy [J[c?<O"+oV"Gg6L=,jYk~9/l7vO\V2s3f\M73a sOޞ ?RKs{0Nq77+j:˗?i-]oC\iNڶ}-s{7‌wVVƉ>~\ANZv_/{s~] 0vw.'g.C)iC(S b~oYwi3Rg ;ȴl-WU΋vՅBH*!cx4?E:]xef%~(n#`jUq 3Mvq^lo|}X/epMx\bs̈́vZ|c U-OMyv pK#YKy&E-cRy] c,#0BE-NqVN}砻֣S ]כV1\Ǜ>-tX`fA+/uf4J ^mm%[hD!q3 4&e>!/dx>2uj 1x>p//efUH6̲pwaHYVma0?0&R9a`yA9t -aJ'LR-ϕ0.%:-%:W`i+˜RY CၱkT V0brǜ11":‘Kv\$E e3i9=K"SH81F: lp6HF <Xe;t5H [!!SnNCX5% MtXn)v0usWRbL([)"H1Ĩ(cg^{>2%OĽڡ<_]1'yE؞Ǔ!l:ɻ,y9SlPk:#݇is\UXYu'WS[a9չ8cChHJNoclQ\lvO+83v\ˬExB{Ld,:drb_+N퓟00[Zz Kgw\iưbt$P{(NMtVU45igau] vVյWU a>Y A0?LL*Mo'azA4gMnwj nڸlWޖpJhnɖC:Ni}`&wVAug|bM:Q}@݊!q1!#hCjuh_7Գ&p]u'q%: f3n`UѶ'!0H3EHۦϺaYǺ)MvF[otTjcw'wڃˠGټR ٯS@~azنVڤMDzɟtgފ s(,X ABY"|+i(XȽJg-;3F,B0#BXYL^jʈhA #(H8Hwg쌝ؕMqd:p[mJt͎ Q4`iwmX 85v{w>x<&=K1R~< yJVAT 
lQN܈ JrlKꘪa'-Bc,b|6 g AyfdVɹ ԡQܳHBr˸bA#`NJ )Y~B鷇ҒޔɻU"Ze;xz;G)\Yw4E G^: ƭp j(1#e?;ly>7d4Ș@`aN ϔMh0( W)Q!Rly:<]7njPl3[O^Q0^GTo1TYg㼒$0N.Zߙ֯매DRMעEg?p39E]#`s8K]u$Fwp y8 9eD1tĠUu*1z!uس?&u:&9'U_7jw} La7h=}=1*yĒH֢~U4Ug2{ bzhn->K~|u uH=>4mv |؜7m'WXGfmrŗ׳q}UTn#$6> Rr`QgI"y_E EIAMmJ!B M6dq:ήze"wRkf料*&ٿ}9Ӳ>,ras/SJSm1C-Hm!(=.CF#ņQbfK:+aD~FQjp; ǪQì.|qqA"Zԩ͞}o۲!>$qܦ) D.wWEpV>2Oi;&n,ԐsIlqM#txS[8n,U7MHTs@גWBLꊦGăeOLt8Glج~Q߽񬊞f; JD;ǵӞQpYxlm&L82B"$9I 5xI< tcxa6|k qw&!ntp~' `lء_xRmxҹB*>UtV'#S\2B0r+o]c?i3\1幈ԜjӮ&ԪG^>G"f+C02Fro<0ziЂZ)RJR=ȫ`Q ? >F a"wVַY}}7u㨘ԂTl" 1zOHJnLXUP#a0]gi k4/sk3P/(ީ~"h3aWCcF1_|6\53B9YFq,^.>S2ĩ8NK{H5TK(` Qb,8IRF1`ՑR{ HXGgObS&aJ;f2Rʃp-aó^#:I;c@`'㊂.O 5}Ե}uwE~͑q˲lP4st$24f@hxYBHĨFD lÅ%عm7lD]xGd4:&8fEi%1"Ay~Tx=T+ EX|&UrYYo4RjX%EA ;Jj=b^h$ lI*R.pg/o wZ[:0Ui-%JI,}$ Y:V>oĠc}T';J佊Li!ho zd#W ^9 .7ſjUE|8u$ټHA lf iK}OI27)PfojhT1B-Zmc"vO _}*02`)" ]IA)@)pmGrFF:5 $+:mqkrTa;B&dߊ( sl|ɊP߅lb*CROߞϮ').Pj]69ˡMos+ە+UIƘ٢y Uŝ/A}sA]2sm> 8^ sKjn$ jOdȋS,'woB-=Q[{b|{Omݐnv h]X^"P+>Uj\n.W}]Wv ׶JjJ9K%دHpp6^ܱc *@&gAlX,ngn88~˿]޾[˿|*CG1?{]OZw5tM;kLu=+rKvx+@;t˽R J/^ҥ&:j&ar.|GEwIIT\}+Ij{p׾2! | ծBlzIqOV3DBZ ο\(OozE5e:,yIOphL^^ibk^Mv2kL䈒8X  K爄8x'QEdGwmI  \@` cw_,)Q8!) 
EZ=0 ɜNuwUO\v9C2^d~HoOnCvwNEo rj_}~byfGx}/%kq]rG/_14ͤL[jAoMT`WgZ[|zՂnPrzV oGu;,r~ݎ/Z.?ony<(cÉʒBj( %@Y^4eo 19Ih"G -I.L N瞴x\JMSjt}4yK~_w:dZ?>^ќG[o-濯?\O?^oq/rw'>QQEPQe[c~ ;-VGWnJ8e@mq6L'tp^ȥ5ݥj钨g^cW)[I|c *eY 5sQd?p^hv;^O) sE I%i\a+C:[|>.Xez],&  ~_6bL &\0U"'Kϋe{l˧TcOIΘg-iWSsB]~(՗v__}!`%OU ՑWk L&> 1Ɔhhы"8;'jPOm b4)(V ube(1.*zea)xT,6۞z}ULM_v`@^#E` ++#B"IBj!GBrl䧀ՂP@QJdȺVD,`pR6:Qb*Q-7qv[F?+t:/ےV־rKLƒOCdB珗\#};ebI8}tgdLb>}p /ـ ɲVIMQV{e3 ב]U7Ϥ&CIT:9Lr% (&L&eRjCjD!l}Njxz}uY7JˢR#J8/}fR+C|9)@JdteZx5`/xRľG+P@~ͧطyCBAXu7˯_MH]@&M/[p Jץt?P7dkPh5f–'GN6ΝD 3@2JŢM",|ٱ%RepdUޣ5zu}!ggĜ)@Ҙ,U)/R$C:%Iͪӛ8όޞPN*Q!Q4lMY*X3LT '_P iH^+Jhj^ 5+f8-`̶濭qtWj#D =jo6S73vK!ZO'{E~0z=}jobh]kGY'-2=\ŵTKegQywj-Ic\bL"IʀRsRxˊgqc VbȮWdtKJH4#'z|ѧj!(\IiM( R}#coF|ް7 g, WնgjHvm7۸80L?N_FlH(VV4>}q0Rd`(=oV;M:ڦN"]yds BRB)*ljgș&_8Ja6JED=#voFl^!橠voڱ/jcϨ jSS:4Hl PP.$gbGD=Ci8:{ƚEt*+\ r@$"ه\&n4ɔ5Sx7TDLv!F:xib=U7R_<;2Pm0SZMKEZ\Fp=kEIJ T1fhQ >Sɳd ​GS޴c_<=GU-Tzn#w69 i$!GcX5̫vL_1k{G=Yo踷@7&j(jD C!TfryAiR={U:!9695"9rZH/J|=oXvB8f QcIkXtgJ' E)Eҗ8u:%&=Vlmғ onY3o滽-EB9i;?:y\q}Iz+gqڙuO{Y#.Eڡ\?\t-o.'?1|5}:L`G픵û`y7kVJ9LJ,J]9YY;=gw\|NruB)E9 _d[]$/]'%e_aoavz=z}>]u_zq]c._Q3m<1|sFaoW]H~84DM`\_.s+i dygB5&g<h60(JʘXVO=\,peۢvC4O&3+Cp vfKe?7R+1BT5@ӊduD< R\7)Vԣq_Z LkV<BB„%p9K>8!i%,a9 *R6J%PF C% Ǜ5Y?yʡB a mU?*w粷UEy{[8m}{[N: ,;J{.pUእTp •d9Tvp6pUu.pҢPWUJ t X`oUם \UiJvo2ʜ\ʶ<\Uy6pҪӇ+Ry\umfʵ2vMZZY78LjCN{K@ٖGt[ֳ*f y 7w<quOpsmIs;nz9c߃ڲ-# 3b q<V9PeN{=2-s>ڿ}VBs % mnW=Z3 ".iS.6̩{lU#V0r/xޣ[Xi*~ZR]uΓ;<_VbdMXY/BҁNʤH:B㴠,I;-tQm@{0iC1WOb_3.oMY~?YNf/ZZn)JQ%4 yxALte#&+HJ,ֲfkC"Y܅2DEHB;޹(*ߘ)^$c'a[gK47JS M.d|."B8Pbog E$)rQ[5Kq}JGQrׂFs9N٣ڦKX)|T o*6jbOfofiB MjiV3ti(yk.f!JI֤1DKɻ"hlШ.)-Mk$%$=1l5 mfmRC$Qblim[J_"wfgWr $zt vG%PCLB̏,CRE%:XˉQ2JeͿ=(!AVҋL8B xk%< (A J J ®^D#J[wRrć6l<įxJRۤE,޵,qd_ƣ)0DhaK\h1vg%+U$@I-}'3o{nVh+\.`5 j%6 Bb1C6WKr5j6|ŢKS}hk§8ݧ^DR`w"RV@FY)dBmF& ٥f#oG0L3R/3 E Uԛk\4`1[Wl< Umt۬8h=: G %j=PyU벫ԕA[4AAzd&{_ h hM e*3[!(@HqA(x3jώp`FTݣ.RV7`3/oHH&9%R,%EࠄjL YV+"1۽@VQ=r+Z-߇|p.#hf!:38o#aP.ir@wvՋq)b֛䤡bLbbtNR'D̿h!`V9vV_twg" ~W׮z|ou_ L mZI|0A7 #7B>n:@GVZLAG=$]I!iZTUFB1L: 
!'`GeF:#.3(Zi8T I"9d^U0P>xmBV5 eqy{'*APP#HA@8_=UUBN5~d}%TU;+Q"C M2f-B&F#%DB>h7Ajy1j"!%:]%@_>cLG>k ) 15IYf}Jh v% Aڱ", "+P(v75$BjF,uݷtt G3 EȚB\"(Ψ$1S]`#h4$?<(Bjw7qVUp*XT**BȲI|$, cC9QbզX5Zl@VHGYtg#MGHWPD[ jzi[ #`!-AmU($`F}W|ZR0=UK mry3Xqmm^-\vcu7<:]^:LA[.!loNN44z63 Z >;-,F ݚg)DP''ޮ Ę#@99Kͣa#[*3bT2j7LJȀ-Cۚb樇 tyB1C[uŹEYf1uA;(XfhTYڂDe{!kQƮXBĿfV$⤩Mk/ X`G] +U@ &ezBE5FHmr3u#nrvs\:&X:W (]IڈJ5(Z54)hc37 YHkԬUP%H:'e2 b2MV hs@:'/7y~5^hԡDlajl}5zC-*mP zx3XAVZSH`tEiaӠ/B\= M F:`Grsm֍\4*fX. b!C1 PVR"T$fBN Zf !Kk|A9oU =]шw9L0H5oz6.凭y^EE A:R љN}7oޟ@o&dAY.l A;Ni7w ;'I#l33lZ9ؚ62:}@b> }@b> }@b> }@b> }@b> }@bgu}e7fwYʵt8>8le> }@b> }@b> }@b> }@b> }@b> @}@!܌|@؆s"ѳ}@2}@b> }@b> }@b> }@b> }@b> }@b>/rܜ|@gB*l|@@}@@d |> }@b> }@b> }@b> }@b> }@b> `|@nQћ jJ6׷?]ZW7eMmh|lK+llK@mK@iضؖԖCySj'R=np/]n(n[S;ЕbzG =5c' .>6;z&=Y|&~{B؛S aO>zgo_뛗=dDQ7F*vKT3ilhQihKK4LӇHE?ӳ0싫tQ!-i_ctŀc~}an]mZW?#~Yl6/mŽ=zNÂa4X^]W/5hwgzI.vJ-؂^/_N߽O+wt|t_1|GowM3E.^"Qٷ=zo!~QgD1BvwYElʒ3BhQJ*\AVyr5i^tZܙ]S;2De:)+SEC|*fS`^ʨ9F[mĜ ճ+\NeLWHW#椮1"A DW@׋ e`:DNK;'uS+\0u"V2] ]9G1̈]0"q Pja &̩`klqtN*@ӄUCD磮n ]|}>]JҘ=$.΅mJ-x#×CWzˡ‡0 pw\}Wb_*LtA @넜n k=Y7nPt ])ĞW1Z#JcDtut$4#`-lԸkn(Md:@2Q8agDW8̧\/fCWv_\x+IZψsdV_//\/ы<{37ݯs֋hjegr ]Lt$Z٧Oo.N;ru:tϯtS7P˾~zBNsRe.((ct}/jqk2cy VѯW=MwƞQ0 {٥]xqѮ0,@D<jݯ;ߛ'u@ëh'']]Zp~zMG {@C0R\8c#wO%3NIoփk"h$ ΂X.{W'W.W=w}u\m׋o uFdQ/OEX9쭦lvS[^]/=-];o>9M @ޝv1^D/ >6 #޳KyM4JTCN?um> Pi/;TlVnw?6_mkt\$ z/mX#vXfh=z<-³FUIܕFRT8)佊*P[(}v}ba~HBwiw@*jZA䇥ItI}M@uWqyx6t7NC?uЃwmOkp7so%tn w/ĕru/kd=U-Vs)yi %{pS?ʖ%oV^IJ_,%+"hXxR&Z_r.5+o#r.&Z+Mޖީ'&؉=૕F;PϬF$aSWE_騣O}h#π}_xz~L@25aNh{# 1_(cmȻqtܰ+vBݞv2L0]Y'}tхi"au Jon=;b>4Mplh0u&&0M MK eΧ=X-Jju]tgwC~Xb,q6:RJ5 ΈV\m2\0vNej6pw=X+F6Rbf1; bIK"vvUsKmUHlA:[JI*'Ҳ޷Hmn?~r֩rup^ߎVʾR,WC+!HwLFW\!-}+d: udN++ĕɸ+jv6y Qĉ` \!Ur!zdygrT6`Hqe2 jwBJó\ P4Ԥ$WX: qy23k]HdʕUT`yI+`h2rZ"W@/!rBejJ(!\Z+`;\qU;Zݑ\[0Zȕr欷6P2"ꎫ;AVܻt1$$lIFWTdiL#UY(ӸQ@/%ư+%͕[$LBytȥAu6yi)KZV*6:%gyc{r J壶9)u ߽kn250M E4LSJx\T"#TLf ׂP\!^98Iv5dz%gY(WBJ)cr\!g.WHirJjۜkǝKW& "\!Y(W0ER XW& "}+Թjr"`-MFT i/ݓy\!%K KHtAe6BZ{wf\YMIXd qIE (9}W_\#^-D0e]ϻj;^׎VveFu 
Yޞ/tDwLƠ2YI.;>TY-Yu VywhiQ 5J&v>n3Jw߀e0@,!`E sއ9H)\ Q% u: q MEʾPr"\ P$FЄ -d:VRjrV$`5MF$ jn; R4Đ q p wBJe\ P "A x> wɴո|6JL#4PL#L#Y(Ӛ!z=< ]btL6s(aߑ0$rA"kS9G]$j})Jot5,%} HH C FƼjZ%tZMz"ź ypKyƻoݻwKvq8fu#^w(P17{-ϫ~‚}P0 |pC7þK / 9)ޙ/6/+`~?}G]_~KͶ.F㴮p7ygA-ɪxַԃtQJ׶䬾Ylf{t0x9["ħۙ( )];&]sJ=e0H/ Kk"& VT\imzjc3sx c(aum5P}Y<}Lvfaq& c0x%\GT8dkq:Gj֯濧麟fc`sS(ć]ѫrqW7j io>/ࣃJ}i|r4<~mny퓛BSz}toR_a\Um2yV1^ry/q?6?r+*_E#} bnh!{2nlᎣPn&y wudFc,gfsO}0PO7[xxVﲹ{/[ Ղ~l7JxdѼSO͞ػG{WuR? ^HaRp׵e%7 `G;FN"::*=^3g{^}?qջ`R?"۝_ z {Y:^K ~tY;˃ҥMe<(%i!++NMxЖ *ʬbN <1UĶ"ϯ5#}f.Kצa8z>q~?\>۳`MYrUt0P)*J&=e7gҕ2)Wޙpr[I/=/A+5j[g U2uCQ^(Á߬]Qhfef_0 ma: a^Ex[y&V jBC)Dt(9si[2bi[ttqRヤ͜6{q1C5$}TeC<wNYzmRG,KWDї΁jQqC:Vc+0KmК3q5-.1x*'lsu8= m4įs'ꃗ !m{|qR8+ϞeيjlsõCבr%ʪQYAÕwTy {KQѪ䥇\yRyk0SSķ}U,Wg^㛎G;On9~A!M).sLNF!: (qESt8Bc$ޔLo^gDxpPJK\ G.%)Y=fts|Fv{,/z֥Fczh[욾hoQGHa\(Lu& x $!4B2)^*J ܢl.3iMsVLghl^W,w+es5+k%ajgUPBs%GY!2Υܺ@BdU3+VeZi«s uw ]P; ιduPWK-fGSg#O1=J1gD(e!EU%WU*Å2K.^3%ev S1KaS҂RioնRRJKW YŨd 16P ]YZ*D B`uEyrgetԙp\|w(NJs<7Z*+2I2+jF[i _0 ]vW5+~:.S?{UP[y=Aw?.淋4P{~'~SmU!p׹ |--q +Mxwӷ≈MΓ(vc7#?~3S|{x+7^sa2.peWzm/|7_osĝW.6 G j_+ ȯs}Wn^ K~KvM7kck6>]{:iJY6u8nD2!.lS}mM3s!v{ݤ8cyY(ZoDHQg jHںh :L{7 6/&KoZw/)P%3RÜu(S2*[9W%k hVuXbۮtzQMkPw2<Bv1fWKS[FV*f"Rђ .L*8YRVK^st8<EJY0\OLNGe\NKpI%#mATX {) SgmAyYOPId~siN1p5onv'ѕh?]ብ4@3mY*oKK4xqDKCdRQ !qPl|앞<_x]9\P@yrKUh?{W6_6yC&Ld Û,KI_5IEYXI2bW:H3( G)Q!R~?Uԅ{F=Lȷ!ձւ"JPF]~5}"*>fi</,Ԫfq8λ㋉=v,fㆾ!c5d.u}Wa8/e(3(Q i,rPi}EPِMLomh݈?`,Ȓ2ژhN%dN%wbJ ŠE_)?թw.bF8EC>$pimlЌ.?\"-_29WK{'kfb$T&<'' IzTIvvuIm5 Q*ybbc>*ILꞹ0]q:ăN?фGlY,d_f@t+yflEoyi}(8,Slm&L82B[æ4@X"\'@?SIAC "C{ !. Fy @|jړh"?hFy> үtqō6vqW%|V%]]%VA\x.d(RwImA)@_#Cr+=d `V(1sƴΙ .ט\DfPjڬdMqYӄZV+hC1c^1{Ü穀j.ҠS ${dW:X #gl"K뻭>r-l8*&>%۸l'$R& QQR"GX)/Lk,D]c^ {C$|k;uڮdbivr`Vov6ϙG!לpQo&e-v8uB![q@84+ñⶢޥ;V%Sf!9I!acL:R*!{0eFDEb (W$6!`@Ş2 ǨU1kQRk Y8z`#4Q :lE@ {̌%=m <;"SitL*" 3" K`;?^5B8֨"~N=_I1-y8|.*$I$V.J06.<=IQg讫azD?'$Oh<Dz YH5ʪ,c@r _s)(S،FnGcWǡ_ΊN[lB\:$UXM|N麳;??K(Egv`|s\Z%XOx$:9錯;r @8}:v]SNǿhq'xht0?. 
=Yw17%9VA?tg^|*ٿg0ً^}xX@1א ڙv#@yMM Mm05&M>u+|yhmtk@~~|y/#LcXELkUM+L.;tG(y*I\=+_b; P"ZȼLm4 ;k2$ l"!-8l%5ɛZ3)X[ΒSCrMKxZ9!`rDI9< qN8cԩ##?wJilOl$sI20ޖAws<ʵw٤_(j;d^Oz7m36خK *`k S;ĬW@Gjd^ ẃe5X1VZ_\gnEЇMz6dA>|Kz3 4peE.ܻ^}n_5"Ѿ*h5- - ^f/k 6(+_T ZF $*m!#A*N$FCBeTSgb 1L"V !k"}$ EM{W=*%Gk30]8+N-w ?.߃8Ma7rݨrNnT =7OL Y@K:z93{wwNJdwFp@Ep ԗ[šե`n,y,Ϧ ͕E ?I׋azޜWT " ;, Jc0_T7RA$g{} ?ytTYwzIЌ.?dbJ EgaJeBZE88Ii_H+6gJ b̂*}:I< QQ"G С bY*`!CM=uXR*DΥbדcmxR)dZd%+) Ngl&㞱ViJ3/4^KT"-Iͫ>O˸ l6=vKh}bɱZKBAdfrC&e]t$kt셤&RZUGX OX83-15yc" 1ⵛiǡ^7ڮ{`wiMV ֍$L"UjAqUh;%r2I)X aDDȾ&eE}(D:chL?ⴶл8GlS{w9ڨZ:OR-E%`$JεRICۘ;)E85NT?Cl3]I+@PB+Z7m%IWMڸLK^/vMjEI8PA\ 1F+=x3Ev*iiǡ4w]WQR2k^%& ڍiu'~.h}쮹f/vgAcB z˳A@ M6 SҴjrP!?'?YRD3rA'OV:ʀHRYYV )dFl|Rrq!'Q!,}i N!*12ǜs(btBТE>ji%R[iKoW KW).⯅G[ݡ~<vB%;۾;۾;۾;۾`gw}g8AuƋζlζlζlζlζlζlmmmmkw}gw}gwmmbgw}gw}gw}gۿțζlζlζlζly}dZ*a's/R}CU}ތfŞC{=ǰU8+&G9k/?˫E~q~ӿ=MZw~~9IoUȗldVhʌrd8ٸ~#y CtQwB1E9 _{BCcW{|?W|=-{IvfG߰v7G㷿ofcn~`6WgsyX?|يwhVx=*͎ofOŵ5@yj8>SjT#謄JE p$9*N+<$ȩU0~:?嫨8X76POr dJ)&JrI:scKMcKcV9&Be BKd XOlL=()C'TtJoy}9-,*ixMy|ޝo}W͸S~$2sՙ} jݒq|FuQ>Lj@kIN"X4ŐrQ%N6"ZC0 $۫A!DrYfqS`hbjBѠ"X )@8]X4$ A M,4/z!YtPi(TUZ腶1D5UYKYWgΚv`fRT;_rɚxɠ kgWj7 Jk e1kvhi v{)S€  41TH1`HI'TAq U$`/#(ՃvP+^MIU^MG֑(]xByTԏ].GIXDr,L6zKG1tвf2e!xe:*%\ A:# )'Nq=!%evB%O68nmvDXE|EOD39\cõa%"3m%"D@}` FVo<ژ5Tݩfru>la MЫe]Xc8N!8dذ'#r2qF)/G HOi\|Ore`z/$Zdho4@v!A CIQOb cY5\y=hdvu1.kuvナt/KnĎ45Kx_aL&M%2 Y _OVSXMސĔU G[6 mBRCP:w%G^RQ7o$^{QW/\?^l]U>rwxt{w۳/lFNr٢1]-`zon-#ھvBHKOl:eH!/:]4WRjUOVP;9`(ëPx,u|_6awMivqر`yy9zFk&TNFXvg#`?xe;}<+O ՖytqБ(],ߎ/&$ z2`E!X]E1`}̈́wОn3RGY[.DPQEUتjv!sDIz&Cw<+ہ C- lά+[! ALrջۻeUA4 >Kɥ8 el]'/,A:B$p4yX6+.{c\PiB)ڠ3[S9')@%@CԞm2:xCxh /gr)0•uDſ=pN]̾ A>hx $Uu5۳JE/"f?TNO?_记<ʋNE˱ge,q@J|+im]~Ӂw1dJ! 
4tA@{qw|?ò)ُ fh?Ӳǹ$buS%Tヱ5V/{~b!:'UaԔK` 2 ߳|ҭGH؈qb_Q5B*0p1܊'jo0vڪ[)]֞nFFȝ-l#UqxjX86SKضG_Gѻoy|c/Ps6ȁGnZ.]; ~{wosu7 yvko꫏x *}c{O{ ݗXm7ǯ7cs"]}B;m W3ʼ!f!x-=C3Jya{|==Cq5?b¸Y(.: T@)A:4@M?&J PRac?ub~vv~yMsX$sNJ#)S%#eSQY #}}E{16qF0t YlJOgÁm0~Vžuۍ ӫAfDe{)Ի)i76 q-Q3 D FK)FM93(: ="}~xD%BS oI0(ό Bj#97(:y {ZHn7Rl@2h ,Qydcr1E4^JJ@Q%!wK9 Q Ƶt߲ޞ'$^CW~nh_>kQ:]pK7=f us P ȨcN! lW2E 4[a̺ -xv_pH`K(j[/OH-6xbVZaMHJ8Ho32<֊2F1?$G%Nh y`ȹ,#1C&X:1z!u{E8gx=>{=ԥ{؍2Z{&ͰX9!ՙւK2RߗWFٛ0.ܪd8p7x#tOn^x=sAzq[=ӒA~C&hf@M#J|ِ%E1Gjul Kp>)iV\AT^l[ϳ ]}f -~Bq2?!RAgGl- Q=$? O9]7Bӯ{T=a4]\I.?s<{s5.tȻb3z5K\1fz)ڧl]d9ݸTO`M2-lÉ) Txqe-KcהḢf~y)TM-kMk+ܸ"[uɔ6oTTd-C{9>TﺠH `J b:HxUHu:2[5尙dֻqݛ0s5^]cJkGrnϙ( %`lc;{0lvGQX#IW \[8Lc8%˚ijf'#e̸5t9tb m7r>7eg#/,=\tAbTVRmƊG V>S8&8 AFܥV9ΣUh4BkM)(m0*GTYJBD2-spk2YOg"69OھZT[T%o+:4K1Gs./= CX_O>EǸJ[P0y LqSnFtny$TS)Oû^?V_k81A0ɵ Io8h O=cF@SU NVFy㤑➀Is=.&t X} Q4S{~*>/ղmwPBԫǽM&ޓ5ɬ?]0]Y0nxVYxJhob~x?ׄ6=ք\Npb-eǝmF7Kt/,R}Ȣ90,u ا/>xy=\(P\H0ۗM'!v%l:wՀ(Acъm ! D:XaYO" <[1<|*m>eX;R")9S*"Q* 'Jb,#I EԱ/'|<ݕˤ'P.nf+UTH!S, *`iL$TR멄oI-1(F"ܑrXťF҆0Ы ;%ZJREga%#.deF:$c`=c!Q"UdJj!4hA0CNpF,A8߻T("GnÐM 1Ґg*, gv9@$Ȗd#;>/@PU[JȀ-\:$U=]!zd^;DEaHX\V秉g&Aw)IJ/ߪ It~.6qr)ؗ]G"&яV/ie\{:m?ؘoDNu+T|*{꛶ 5ˡeﮭn<|dN2}$RI5ECW|lF@S-jvG%PFZl3qŦ1I}>-ч-A1knw}vPh[5STS:9gˉľ(]9.l4@BVNibTNC*NCá4dd}!$; NØt0tp=bi (zt{@tŀN[wb`|6}+F+sˡ7c WlUfG uŋwU^YAPHUg_,ޝ7QgRVզ*>%oˏhWwZRV*=,G -軡Z 6wh#t1ώ^}q[9b!1_T?EGW-W86N %Sܫ1Qp}?[~-' x?nC>FW{{?磷o,otyw_V&k㖟WFÝm|7 AY*ކ4Ԓ)deGMacpH lᚃhM^0JOR<Š7{s0tpVmNW2E@Wf2)Ѝt墻ϫӎ8Jg6}b0m15Iۤ3yZw_#L_dѷ/>R~?mɻ S:"]nQF]T\~yqW?ףSGD}0'vWouGdն ^[ȿ 骭"pdPQfhaޡ[&f~`N}%TC@17I%Xw䡊E;[EEI6EL~N;"DKԩQ*#ik6hFKahMQr\ϰ-6RJd͛PTS4sC> 4o,mfQrH sE-b& j 0kD3$p' jhhUF-9+ZgpCDkMܽkzhnMZހ{І0Pd !LCt(.e`0 36MB11ИUr#Q6{WߨGJ<D^WT;~u&bTG`:SAd*XMw¯jjCާ$ͳ0?oZϩTRnm5(E5ɍK6J= 9 mBeLHnNs?YbZ)ZBQCBN# -U ;L`J"s ]Vz$ZGviEVeBф ih lt'  EA)1أԡti ?GSVY BlKo*AJ<,dBuXǖژ[CQLSXе;K: nПmL=VFUa+\@R7f=ak;80Tkt dovHCAzC(#Z{A)9x X5RD`ܝV &E0W  M .iN +;.Etփ j*6j|#rd2 VS4dW"1w|"7CA,ȸCLAA&󭋠@zN!a@YǗ@E@iXZW X)[K1 : s! 
Db\4l=x_Q>VqC hA}p08?6h/z .USW̆MHNcEUzP];Q_r{0g|-EҜ%l0/\qjh}0W/pg']OXkZ>9_p$;X x@ ef՞5`6u˿wΪ4QG]Gj]kIQ3󬑲FhQAi73zlz4\}YeF٤#väDy ؓZN!F _EtY'e\0ݩ"SAE R,@z|PCzk:T}f=. >[/`E^1H"^uN mQ'w d!g1Cʟu7(oV1Z8\ 1\U ɖn(.FR#xL 7mYנ\ScQf1Iu'zhN hc*fnQvZHk֬UVm(ΤL! BVx tM%=ݳ4kPqab2|śk΁ lK}4 ܅Ы9ڡ*f`)o"aвf@2~/VO 4%鰀d*AQCIJ’ #iYw 4z> PbH|@$> H|@$> H|@$> H|@$> H|@$> H|@X}@P|@.ؙxgy23-z\|@$> H|@$> H|@$> H|@$> H|@$> H|@$> :}@$7q=FP GH|@$> H|@$> H|@$> H|@$> H|@$> H|@(i!lpC<PQ蓡Onª'a7l'=?ɳ_ggoOV>xyٖW՚*^`Y?e ,*, R+CLg矻\}FO';*%1=ILObzӓ$'1=ILObzӓ$'1=ILObzӓ$'1=ILObzӓ$'1=ILObzӓ$'1=ILObzӓӍ7cjʭ^m[7? N׻~J{nBW@kޯ]P ]=FJdAEW 7C+=b]in9TI(<8] pzh\xnhuzJڳv]ծC4^_ fWxw&lB>_gcnfYPUg_,ޝ7_cp6qKm<>'ܧ֙IEw׫U/uc ~C(v.P`SL_fR,km69-ˉMcY/EއeY_tw9I?tЏW15ͤTSv4]-ݷn]_Q?tޯ풷<_T?" >}\_(݃res9Z;g4@ח NRAo_i;Xuk*^/T9MM F-(nмV b5J70%n`A@+Fx6RtŴ&4)wRD-iT+uJKBbJ[)AQxlbtŸRtŴsS224b 0_] gլSe^j 2\^ۡUMm-4ϫeTv;m*kTWEjCƧw>)1 b29NJ&Ǵ_&39;K&7LP4`+GWQֻuŔOQWVǘv voA+%EWLҗq) | p+5FO֛`4&M1p+N+Eau;5J8ș4ŸQi\u U0Z+ubڙ6f?vEZŢ *c+(FWŌ]-짤3.z9GV=+|$7Viad]%F=P{%d(& Zo?03~Wk<I(> Mp/FӌkM3>UfJi]<)|䢓u ߲$ + )!`c9ki]̽dܶ/AOcI> btŸE']1~T)&+k`yIbtŸALtEAe+etbB9lr or+f0i+4uAꊁ0`_"\;i)&+$]17btŸALtENg]1en[]DWA4qHFPuN+ubtŴc'Dr[^Rt ]GV;V'q84\?D4ڱ&QҕKЕ+zjkF,+Ti5x Z$+iV03iA-sҶ ZÓTKJ<%[Ab29uAJ&Ǵ1e%`&AA"`=i fi:8tue1G+'b\)2g+tEWSԕ `d5n+cm!F j€p9btŸVLtŴN+ʅ`Q]1اu+,$u ]nT(EWL dJS)*D+NBD]1mCL?BoUWߠ䋺J5#ϻJ0(]f EWOz ِ-_g5*'?&Բib4͸V5͔AMOQ.Md9?rɺLeIz}뽔iɽ$ʰYT?G&Tt.z1>ӎ S%G, hAxډQ,rSB++F9z#EWL!w]=Y'HW+5NO2]MQW{+r+ bA*ELˌ)*V8șɸQL2HbJSƮd șVe+,_#>+ Ì$`x. WquFk8JmtZtp;z kc-F$+eZ >%03-s7$R˙ВTԬZ:Y\`+'/w6߾/OׄoSZ~5{i::y_`}Ҁ$q=HIB6ܓP *IAP.:I2`㎽?g)MQW!&OW㢘huŔ;vtt@Abѧ EWDk!{]1Ţ W*%FW$L 6w]1)cWSԕ7QkIQ7X)"Z0.w]1ޟEWUt{Ab(GW]1-@b2=uz%)d`4btŸN6d]1e,U|dcew1M5vtGU-D,ttZ4#Ev8? 
FJxͦpl:65MF)j+sgsL|zu B(e>[ z#/$`ob{ɔsA(JF1b\i]WL]ueѣтt/MZvŻINAuN9c銁qk 6d?ʔ󚢮t>1~6AuQ/V5ѷFIbtŴsS2v5E]tgQE)bAjA tEf\+fi]WLʨUQ*?D4GJąqDZ;NtHyEWF=^W]=5ϖ̌=6ޣ6CKR"FrjIZv6̜s>3sx)s-V˻n-,J6&JrPJhI[#&cܑOu!L)/39Q HWA-EWL2آ D4'o"AC|`g+ҏz}FUw#* 7hmT# Q ]颫GWw^/'M3 cWY˧FM^jwx 5/c ТrvV^rCJcr12;z '|iSή?u-?.Mϝ񋫳on~us~.Zϋ~o_|kojƸmh~~mwwj|vrvcW۶!];>:bBm< jT4N趜dzUP #e_uy{YǪm f`1ʄւ8~0:42 "7hj }LC\K]Ҏek#Ëzymh궢W.nyFqrsRiC}C/evvXng݃_;=Y-On:\;ES~vV^}Wo?(=;*}8dٷ,OFȷ붽gQSC5.i=ÎQzjuJ!Хo~*Bzv0:Auw6@}xH55hPwnap|o%emuT.f:vbJOwˮx/(ؔKw7/W?}@Y]kͳv+3~˷t*mby^?)s׍Qߟ~M_?Pei>[~+>ۤGjdwl8o<y(E9 mBrޮVK_=ޚzz~V%}FMt;2;U>tDXQɮ=TuI/V5q ؟Y%zoh~ǯ~@KʅuO#v%o'o}lzߟ>僞=dOδmbc[w*ʺP; uklm[V5M#=V@!iQNn/q9qHF'fe{n:뭜o':[»?mj!Áz0޷vm T0mИ 5\{xpړҬGEHvrl2(v|&-H ذ3{ =WTǶB|_coP:PcVMjJ-ڀ7JSqZWmh6:g[ u(RE"FUwmmlo\kDY]nmU앮`[MS3]k!."E']u=H{r^H3xoL#=q= y{L>sKuoႣmۍKt4RtdGGd]!$}IU")lu*MWY3ڙrzZӗMzRrIZ67ioaX,{(7T\јAp) >F=6"gTzR$(OAaXV@ٻPzRa>+3b}U $67(t$F(0GcOy @a2 L8 |vUX/V嵊U0>S΃Dtjbfӛɸߦˁ4;Ln+Nhdp{ĭ;\fzM1 $Wrߟ@sKR,D֎Wv֡cX4|bZtz`Ick?<9.?Ϋt{.l= vle-[w'mzP^6xU`+-t FR;S 1S(7C:f(`q2kmGKQn/:_PO DYf1SrlK*a;%B#/l| G AyfdVɹԡmE[ƍ 1;ƌv3:$^)/} yw%x|mVhq=}D\Yw}n;KV` y XD1\aɉYx:~? nx:G*zKȎRB<-]p;ŸVX8-[d "GNLc>.()uB+$:6g4mdJK^@^%hl<)UgqY(Pzշ i9P>dUE^wy]PDږ ^4AHUxe61_Q,N?7.Edor4{˞^(Iƚd=^V#a ͽH.MOjy=]UӒAn$ 6L6`YY!"w=z6(e͚y,';3{ܾOJ3W~+"ٻt]U_֪zDƯ?8Dʪ~⠳?@&W@$f0EqHRA_,WP7Kf4k{x)WJubg}.uDs/oB> R7bꙒ&8QٰNjڀ_ן+x~Q l*5R\ww'FwZ<)j쟓Yq~ أٛdv7o)b~}]'gxi6 '#ޗ[7ѯ $]ǒXK?IjqVZ1竹Oa1|u|re:؆zBd<)k>Uqw[g))z!U~O8ZnMLkܸ"[7|!MPaŹCTZb:5_fHцăy[C)8 ѶmOTa3mP8vk=Q3B*RFlˤ!!8|wsZO|J;r9O x>=v#!x!SDhx @|jړhqhIŨ:ZQ{|w~<ՂA*В0uJi * b=VH`HzZa#"S$"\JlB Oc*5(K)µDX^#:Av6,.JH.-b4\ݲobKt~ KgEaYQƃ1B(! D:XhaYO"5[1tp!}vEӆa LE$",m03,1%``!AyZObp]%C^QQ|*g^7 )1 (a1PI%Ơ [ ;I[NN3KT KUi-%JL¤FF`d9.^ut̸O5(*2CFC54 G!J'8#EG7w>E݆=.oq!|ΡU pf$9dJ0J$IX~qשQZ ĸd{(wL ͝a;1fVG; &a9EVe+PQ0EdN.TSah7' $ t^kUi!2! 
H ˥CR/o`B<]%a4+㪰8_ΏcbL6 J U:dӟ.g f淚^)NO{ᤷz) 8SS+r5LāOզ9WZ aCO93cNwK{qӳɥg:<Eq)jiFGN<^I%p4,H+Qd&-S٣p!CNwGS>vDMOr|yǷ[35cOt|X`l!8j+,z$(lLD\d$n!$kh֩LGCJ9ōpiU"(F[ :aFhNPƎe ~cY-)JQ.Ϛj*V٬mE>wSQ `* bx$Htow[U]*b[MG ;Hx#97L [MEttG{G{0RrGL1Fc$%^y.r_Ƒ`ɨ( WXˆ1",DDꥦT%a$E{WFrlcxfA&YzuX)RCR`}.H(j9RDAe{ %ݚq"AY2IK H:&M#P 2dL|V;;Yj5XB}I,".Ŕd4{Z{0IIhI4zm'D_S+ i%IdBC!FQYP}'# a_줧oNN2  VLtR >A6+",ӹD œ"8zU"M|%%CuB0!Ԛ'ŭQA( :4 Yq6JiȾoYvk^r֥4hfi xM $FDa6() qx;GX6n(dOXwbžMiQ_ڱw^DMڮIh5D?Y$M&D1bi ?iJ(B6CnUY,b+IJKxj;j<^aN([rge$LBTh6fA1w$ZYP@1D-4|J:8FA'ST@v骢SYZ.*jRvQ,Qem!1 f,q)!uBd$ĀUW[hb]՛-آz(>]Odt rǯtǹAm ګhڂV d,lQE:&s+I{J6Sƅ5Gט^C%dӑ%Đ c0<XRQ$q:8V6%Y#l]JGt\Ef1:̑I4U!؇(D#gYH0e#FKjAM*A4X&t GlȕX4!"Ԅ "+:dIĂI@0IF1qպXHJ-43dN>pgʃ 9q͜0WiR )GAM3f"8i%K<2Z%r31J&L B KZ(+* O/֝N/HHrq.O{ըdO(*u v'`iRV# ]ʚƐYZr 3$ĕNr(r/u+uey~DXy mLotȝJ& waƏ(Q:!wx*lp5+MD0yIV GBcګ Vy-z-7PS!GeibAa~|_1X^e) 2 B+vde8{leWK)E6YmeMV(`], 2EmWK<K>%gj!=8vJy5{um?尝|_<;KqH{; <B0UtWtdfk8:kާ>j"@"1. "hD4-#Xvїvf-vxJhre미. \  $pϤ)R+ 9-?+Y'P 5m-'m%pyzd$қBr=Fx>պczOqNa"J0)@*9^d]]1B9D ( I_P.E6a ?0p }ǹM(tFP`ى}֑[N?@|F;~y Y 2iR dŵTȥ"#`9 =XxlB+y>"0֪͠vmT nDS<Wk"-Qڻ4k?EjQeJ)Uޙ  :Ղ9!ʶn>W !G2Mbb좹umAm$S|l5kԃqP9s<0sqϳd: (cP!N#1 [16-NvSj"5R9>|HfFH!.(e}&}տmkhq%ʭ a+oC?ÕriXnlك/pR9mVuMvPk=!'BZVJ Ǫ\Ѳ-Q4 s)qv-`lU)jxk$0jSjOc!wj 0.+5*sG-XߠYي. 
ܳ X79GRmN>1#cv\0'A:2%}DtwGǛ%3'S7wȞv왇ɄyhgV&LqB ns_+Fp +f2#<'"xcu,#95YF_[fT.fI{ſ_PFݝ,z^+g+ḽ5&(ρM~Ǭ!qF͛7cg 況:YGnP|)0$1`#)R'"q񊜦eRgy騣JR{.6Et=[p&ޘ]eqV`0\cn e?0o`^>V*|L4pM//(Uv͟H܌'d)lqk*o}Lң][Ltaѻ͚4|l%H5LkyNbd<7%-6Q:z`z,=t9wW)T;\ ޼MNE|.M:%4N6% Kj󢙾\]o=%,@[kry~P?=NNVier.ZbE,jurBy}X_K40-g`Q\ZTwhAR9(wywrizkP"_*UU9ڗ5:f] '_k}Y\lePKVEMPvvy<ò4!IWhr 5 >a;v`r$]wzP/%sk[dΥa1 5;+w^#D:_](J|H>_4v.瓼KJ{+%˄|\\i+2*-m.pP!`.e]l ǚwQ d]$vuAca;'k3Fi-#zW\3mXFW> f7?;|p`5 wGa-^9q.}ofJl74r69bNé({4G$"q^><1wuӲ I8$.ݳQ<|zŬ[,;%/d^J!'n@Fy :e$Q<7{uY _7E#(^P߽7'7/')^Z}5i*bFdcgJRe輖"Z(3-rr)mr}X=*^ʙFrd>2i|޵qd׿BSz?C"$$k/ F=# Csnp8jX^NO{=N)ȊLZ*T׸w2liEZ)RCzӵ+ߺHAEe#Yx;T"K5\s27ZC`hGh1AǨͧ&2=^Q˥ޭ-5+|BJMU6,`Rym&5!zIFKB E6[$"Fzy~!)b/9%|dD2hԧO{.YҫzY !sӮ,&Y&UB%%sUK>w!7S=^bxkv~%B!}tBDF]C(9/,%C*Rq8֑vm@IzMcqM>(ܭW*ҋqZJhh%%Ɋ-Atr=ʜ%x:j*H<,QڎMѵ(%r'm-)$8:BOkP:"Ҿh#U>ڑ`QOI=/]FT,CD 5 R@w"Ru Y)dWh!Vs"SM׆RӌT"yˌOh|z_ rdg'UEF'{'PC?up;~2Qh?4 I!WˮP@ע1Y cP[]J, qC`  VX46gG`q^b ~dBn!]2U VjVa "3* ȄF.սh\FƑp2豌^X@>ಬT ++5f AX[rd#6l3X,p -0 jG@A. L1R"58()$ؙ5TTũљtҁBРM 2Ù}(s4eB : ,E{dh_u6dRՕhދj\PRR-Lr 9 UlZ5Ȳ"Z$$V#MZ nmXE@ZzV\FҤD:38o4 mݎbF^*"͘uќ48)*&6Oo$%0!M@!w=)cصUV2. AAZ$ɖ@ƛԁy.,}l;@VLAG=(]I4TR4P<9?X!`!39yjN[D.iՠ*|ڄLkHtqi/! O×Ue 2inuG-ۀ8?,TdAX? "rugE3 jbZ1uDqw_b΁6DD-1`0B5t15k<="=C %h v% Aܱ"-tC"+P(vkhrF,m[v#i^ "4)YC c~6 3$l_'X(Ұ" njtM"ϲ]@rĪM`k>Zl?`%͐Y g#Nj4ZI@YIDe6RSң*p/G`A-A*w(a:JMC}_AK"0AlmRkC6?X{6njtj]vu\-:R&j` c;V$L'Kic+q0}ӿ ΝDmwWm5EZSR ΓDɃak6v@(Fjݰ-Ҍؓms$@=~DluPmkCQBD{=(Jj :( ,3 4Č.mz x"2`݁z{P}caKh*TO׮';bE_Qh"Ndvikn3`N_U C 2=PdQƈR܌"15aw=u#Z`pO +QU1t6nӺr0=XCXAlRԞKHKj4(-lF]A?uN^Y%9kW!*_o LWƨ8$Bi1 >-ZSrlti =A+Bb~~)h j|d$NC爽h\4*w3OH!ˡjIrSE!:.H,=)`jI. 
r6R =];S@] k=f-fїyk@L2<3$ U{-5Zn}?LZfi:l A+No1̦!Yנs> 6'npOB( hODQJ}> zZ> }@b> }@b> }@b> }@b> }@b> }@R}@ N='rR4l|@֛'% d> }@b> }@b> }@b> }@b> }@b> K|@&~ɓ!ZQ:>/}@b> }@b> }@b> }@b> }@b> }@b\BV>½Yy> Le}@hK}@b> }@b> }@b> }@b> }@b> }@b>^XoM<ޫN^tv{if9Kku^uR׋诤poϛ W'T hYͭ3ߚ N zDV_so^\.և*2mc'&O`_Khji}ڝlP1^NViQ򷹪.T+LE:W'd~=aA~bWgan*7촬r1up+98?ӯl ۮuBZlZ# ""Jt͗`}y/ivN6]ݞ.mV}qwi-w M8hu[o*J侒r{Qܸƀo,l\T8/} |PqUpGx)-R+GV,*vy~H7Dsbܑes|?/ަ-BCz>qxPď7F1=S_8'ԟ8Opѥ+NjSkAΦ8.)Xsu1@nSJCi ?ъ)( $zr눽?"P $՟IPR?eA_'&(ѓGrW2>N$C>K F[E˹4L8*`{OMGkdJGa|`ƓɸV:Z+SvKP )Zmk ]g157G"p#82 eƱXce,XꚘCZ8ϳMӮ~__0ޢ#vQh\!IW &Խ}imZ%mm:u`H쥪YiEG^;T*6]uŇ=,ÈzKb>ǢvځQ}S:>oK;\T`W;mSQtK9*/=]Bѭ`3\y`Mm8T(?\퉠;}JpX8pw6Ƨ bdDdD/i&E"mID,($%D%.iL){Ud#ve9m6:eɑu*ȸx_j$9SPI-Dcihhu1148SA2(¸Yp8`<]G3.KFmK=G*RERJJP" Bs05㎙@ 14,Ƣ`Dہ`9xmQ%R/оR# -ڔw̼v+.fwe/.Od+.f9d-Q\NtȾYmWMTU*8~ FAM✻JU*F<4Edg_M;>ҜA-76v+#-z,-]4:A*QDž)H7RO  Z0䝵Ax'8v]8oBmPh;ttzb :^6CWq؛l ?EFI_zw"Zkwya882on ]a{x?Bپ -(* &Pm_kl]35ml;u&s+Ssߡ!:xuBL ysUr`ئ(8-;UU.烩'\t|A36hcn<6wٞMB$?Jq9.P"DBxH9&=erí\ RpL(53gs/΅9 L_j&}Eh"TX@< *ϕh[ ӞMG*=u #P Hx!IX2#8pyҕ͂$ \,:a$TZ#!F 5#N iN}iM|cs K#f*ʡ.XW'Q2DK3d)4LjMQni.ɅƷ&3  H4!Mt%bq߇ m%0tM9!. 
M.mbFb27K8$Rr>f Z1m#_PZ]uQ]eF#ңԠTGǜA˜_ŒF'tc$|PErU3F541e !&aVG *!p3ƒsm2(O;kM] ڊ^~GB[7lrbazJ'|.+)/gnHc*!)!Qsʺ/OU?uVdZ S3IUJxi ͉9A*+ǔalSn/E4[TH8O|_掚hG 44?|NgqљGGa8_H[++?@OOv O$I=UJQk2R+P]{TNS).D/;xxEtK屴WhIs m4Pc,CeZ;aSqg`ȴ7jxP5nK#Ĩdbg18A qʑ2ta)Drҩ\0,F΁ޓP4&ͽf N#_d'jF4Nb{e zXe̮rwٖ^C{;﷙K,xS҂y4r{uќ/ T9k\dtr![r[b4Tm OhϙZCv/bsl"qi|kI[yޘt`.d ?MN"2e֐48 fI/HV{~tLdTDohʲ%vX(!kD6:cgSinf?;`awe2~yLM?ӤLv6~,ӶY/Yc]-7e=+\[QCѲw{nuǺ_R@q!?0gYYA}F:z{G)5 ,0c*$*com䨽 S}pAmV'+sfr]l6J//B>Yڱ w_ۏx.+}n޷}.e6ڧ3Q#4Y%c`F_Y 9{a`CN#,4BuSkR &Q)j-%(pIDZP3X"UeSʇ tL$Аk ,AkCU$0;GR9K (o9 & [[yNvGyպG<]xm';OxH&k@!dw"h"Iݪ$JBNQ<*tc4K<^' 1DC0/VA!"A6׺1ƈ7tp  8.ά9'ESrA1(\ r 9}|1vR_L[X5gys~uZCG CD= a!>Y/PYAۛG^Z$qƭQGFZ\xmjY,[3}ӷxԈ\UZZEVJQ%X-%A lG3?^]9s70~;MYm1.kz4%wyռ}iyhq_~ϦoBo~tw}w=}.SJu)p3CCbOz<콸%ӿuiI- _WkWqc>ψ'dt3^jAe ddy";3P =]ϻ( 9Aitd>uh-됸:5ZrP'ӄ} 7A!>u{f$4z9 ̜I 0omݍo&gcJ9 l9 '5vr:g&a*{t`zbGNݛLD nC f3//UNUw*wEWY6Ȑ_tOK"LKcL7iSf~ڲ@$ (x)oD &=AQ4OfjGe- 12 (Xr*P2J/sO4n_vA/;UɍiOc}c8 'ąU8S'u gJ6T Ͷb50VZVJ_9u~)Q R?UBX[ }e)J8.)4IL;)ɶ%eb'J>$)  0J |a >hwrbk3q$<5,KIպ*Ӣ*S輪;(/$BD2&|H24q ȩFђUv(>A)H# '3?{OƑ_&o3t߇aᕝgI'X`7Ч̘WxV3CG戤L*c9f(,@QX/5^#l -Жh=Qp{o-#As9(՝)%DtimzT!$UHQ+TMmT>poo4eb8xqLh#sx. 
t!`ymc,`pC|}]Tz2ws`B`4Ai\2uNu\ŝCl^=;.ʹ%AeB}fsMW&28!b{!{՛&EXOMΌ$beB&Np*fW=xo2ZBXRG҈xD3"C4э2%XSD߃t4D{S(; ZDro0CJf/hslolsP!@Rl0^Pkl~HOA/};[-Jz7 Ҏ]7bZ{ ٫ػ:0=*[5hބh~Ç=Hx#97L [Mtk֌m#5c Za#厘ȵc>2"IJ\8R*Ƒ`)( Wm׌'5c1&pvff>{_Mg : X*+Nc# ΢S8&8 AF<๽r0ΣUh4BkM(o0*GTYJBD2{ཁyhg=>FXEKW,Fp%Q?,K`_-c1+A\10 NҹVpb*2|%uy&{E2NLLrw^BSϘ8P=x:mz4‚Sѷ';fR=XbrO0֧g7D5`ǒRMz5Ǒyp[͏{*.h#1>g\sZYFqіFY?=~ږDxI:4 T` (cN:ZZqPV+lDTcYReSt3jf߼9(Fψ24f@`hxYBHĨ{6"@}@k_,5Rۋ*; H)XsRɜRafX9cK``Q'dԎXv0L`o+73VFS!6D0h z*a-4ŽHPI*R.p-oh[ HzD~@pDZDZKIR Y6 #.diiόy``=c!0DޫȔ2РG9X Q:(:lp{ǰ!٥_HCzwɀ e?y]6hT\Y6"$$ɾU!ݔa:6.|}b_naqD?_xBMQeRg =c]HAD0u\NA7u^][VPVt!XK 0!=wYl.,xFl>gu66 f f|z_!VW9X8tVwEt:K&9BreJNYy{ "LSyUq|b8 .OaޗsKjn$@j7Yxo9Cf$xHoiP7 ։-0Q!`,&M>=]f!'I?ɺQpՔNS,<,x$ ́|M\z| +f) wq6*XRMy|YoX^\oo?ŋW/Ϸ/W_!./yqs` ̂K@5 v;pgx`haT-dXW͸b(׌{}@c-ڞ[ւ(}z>/zӲ!آN6[VZ," i }[N;zEշԃB(!fLPMGI#}O"= ;szm5HHw RM2ExaPk&mstY=yy=Sl vR3A.GzrXnX8G$,;ግRs#Xv_g/:0EhZ= ]衻;O ҳjSwöLa\U;nJO?yAek. ghw_~&f,p9&9܊E)2L~8q{z2L.|?ʦYѧ%=;Ɔf߇r bFax@#DxL'삌`#Sq'~$PP=b/o#K4fLˀ €a:fi, =l7ç*u!O e f!P}Ugt^TgՕ YY_\)gfpZJ3 uVV+ AVoܰ&uW |{)Zi\_^8P+#ڜ-rm|Qi&!wZY-t]Eަ . 
)oy:*oSu{ڇ'[Xc|S`$4jBk?zf`E&& V- $ isf1%3+&hΐ aD%ACJ~bg,*ֲ-)d_%2SӟMȗ~2^ie)q|F1k)T r˕]$dSd锅*)_ru:VO0aG 3ϸbb09,-\ ?%ք^|R`\ D}N-y2ajGah;d #!qeW`A1b?>d^AF(Ɇd*(VVv2vxI3ռM9E-i"4")UBR3X>JVr/`U|>IOr`Ǘ}`ML:XԔCo$Zݝj67]5P+~V}٦Q}cZ@8[|~ǴxIٳr kh,KOɴr?΋\E:+?, }jZʼimF?4xg5މ1Zh*5wfs ނ`n4g_Sگ;Y>f;wi0_cEXPRQ.kN q0jٟF?wl=5#A( EH֊< AxFmS8S<\"RJ]iYZgNyrLߧ%'ȊbJƒi!Lt,+sWba]ӀQ 3.BP 'j&S#4g|'b'|l!U!v4DqӐB:V n>]-FQj㯱^&(L 8o.~VxFS{ [0)y呇ay抳ٸV% A2u.[dt kƅhtZwr$W&&CR x[O9}ioTbҔ?o ':E2JM^Sm/LNoŸN:FLn,|")[1k\serPZmIXF:bW]@y)O ĽGz՛0MwxqA~7ONjq~|bh B.u'(Z*4fTV`#9ʚ8TsV8:6 "079pN+xF]z{KI|8|=Q1_z3דxJz1^!PY΋o F'rλ:5WJ^5  Q 'Ի€U3H^ƌ8 <[V8.)w_ pͼ?(\ʯ>qc^KFIC/ӽXxCoWTúQȋe%K-u-/ɔeÙ6d ~A`D#* a-33}t6ߔ3뻜/Evk%=SJO뵕1=asa:9Wm.m]fmwspm![=%Dw虃dv^$ߦ$_>cP1.ɻ9d!W4ڽ^/F\yZM7ON6$+m  b¯4lf"t4c}Nv%sWV Bфo6:\sJ":doY>^?d$ց>|q|st?m(?_ 3&InOAc~f+$&zI~BS  l E &R TDUr);s+fWykX˴ŒxXETV[i֑c(fY[̌M'ZYU6瘲!˩(D_Œ\٪v돯7QiAaoGtcAfi:t(do4A`Lk <Ӑf$R(5dF*W{1UsjJ: y= ^?v0vARf2I!fbQ$|8 !Y{v .ZieQĹF XIpE00b|M>bS\؛p*އ>uʝB<$W5C9@dٸf#1t8"Y#C4P"'wS!ezM=.02005Ut(i\X Ƙ YǸH@0:G)=kJUyrԽifB2Erh 3gLˆ7+p3h &WPZ& ezp<v¢moYRWo{~{,@r0?L0>S6H.{}x e$Ń4eP&"H\eF .)H DBkI(-_Oʴ\C. = ֖K\B$s$GZ~pVFlYV|>zhIfxV$ RaXJ$U^`B1_'(`n֝ Ϛx;LԞ5cV2#۸pR柋Q@NJRy^(5 <$7cf BiQ LQuD+iA=gfOӉِ{I/GN7J5}.~^&+dCk'릇i{Si-}nw&~kP5Ҵ=8 wfo<G%HD^Au G73ݦ}IA Bo3z KKceZK 6I~}! 0fA &*ES%DXSU9) SL3B2Fen9yQBDq'^v_/'Nf}l|K?0Ae&ij&45_' s"TB͌<8_e7{nTzfuL0(̳9eChfy,K)N˜fguRbw@N`8k`$8kJj0- n1x#c&d c` .LXoONW$@\uXiߔ߳<sfaH6fSe Jdpd0dNby d -v$LIguREcEsVd<r8,àya Q nOK Qr لLbuX&1yWHY:&[Օ%x+g0r!A s {iK0bi{=}Lbk'Hc++ di|:sܘ $d(xj/)gt9@y|v]R))?THX 5gp]鍊FTVS}YM< l5Dy5DM12ߜ{*Kl@`}\w{Ę2 %( @~{iXSС-*Ěfp #  hAQ3asPhe|C}Bu"2&J$Ź\c&P,Qas& &)03[v}3sd%[aakUz&e%$i BZC6#*}OWW?;}[b^?c3W^oU\JEp|nb:h-2͜Fm̘RVԔvC{f&2 -&=|!Xɸnlm6ڴ3ܕZŕǛ|n2 #m-|G2+]&genWy?!AN#oFHqy(S!*%$:r?}{#ޮ $ӱVr`P{% !Y 0!huHrck9>LT 2@_Y-p%Dr*Z%ŴXn/Q繢\g_ZPQaGuX5]b/k!R!r=^ Fe۹^2!t߮7ɜ?۟׌<@\/>ԛxzr45JıcҪ݃t{bQfd|N&dޖ4K>13e$ϋ"g-i{8Qs/R/F펬__|2b~6rq.Ho{cbTK/EqX_̈rat`_Μ?2F1Ctf97. 
`b]Ɨ~id{\L.jޘh^la=o(|\[f!ʖv3״< 5e =tR&qE3ɹ9b?ebQTm TCI נh,/P:P (TێLpfכq!mi&Ru-T$ ꕻo_n$_Qʹ`|_.o]kCAh5mo6x fI0;dC$//JǶ1N6S>fHڅIpI`\tvZy GBՈ-N-cfrtR|tnȓZlG}X1Uo3Nn:KVSz*Zhnf~m|BWH5f -#lЍ  g3ZylA 8Lx>vT)Ծ L6p'ȑWZPc+:n,X؜r/@Ц]{G%^Z ,Z Cu݁|H ,Ot5_-774e):TS i#1͌>{ΉX B>=DL`]O'٤X5vVIar6*ɲbl})˹uZGx^Q1ȦoxEBqBGϧr/"eY2  T&wn{WAʧlm4~PƟJ <#Z^(nO-/yi/K:&ӑUX; ]}]:%wg~f\ sU(.@"a@%&Zb&#}쳚SU`=:>쓄m`dt/ D+ L4 TB֓y8}= Sf{X9,{$nlUjz,Y1%Ǽה g\Ջ%rEo.o)qDzO/-|M>Gxv\=Z{E{Ar3k>^nj/׺yAW=ƣ36,OgeT:&˸j<TeU#ÄҦʫ[e%\!-Jx5FsPqXzfxJPzB=[t`gfpM-gSKV8.C mN|lEyzQ2: ? 3;87wwqpqCvӖ^OWu ek *zG^* q٣.ӂw@n,&P")P]0ZnIp_^oszU֮8t};0 0C-n@e&yTyiPw9+!HC9%d`UPQ20MD7͓ +u,J=;.|H  `LA@A^N1)}aX ^gijBIfNsLͱc3Aa< Ve1^ :K1+Z@/P%:"4F/YKPƼ@$Zb~u;c t@d/AyJ^~֟we<cxoվ(z3NTβK0D*Q&DBbg|43S)]fUAu6E=u|uef^%@&xTbU;N>sRϧc\g0dVlՔL4UH"YfTX>S=z8TX^/5D!|J׳u@lTGx.p ؖ#*i)#SKGV-Cnw<9w$I il4ZӽJf[aъ|a,MUEࢤtEf=q&'ltJleUR3~ܜ1U/XTGY2f d)bQCY; s%TݰB] _ު!q r{]b"o}ɠ(L?+nw4.j2Ԟ+ 4g uJvt4HӠ}\p&nElrPcwc+`97&~šG_f K~*UNBpB_]65$ʆaPgn"a(Un+m2SF$=c3ƕlfwÿ>V֘ITR3 jjtM*iafy2D72 %Tp 0/x[rO^ǣ0KPBlaYn:\L=e,t>a&hCh\Ayk2VZ\FWf5@ &IeO*[LE3z98NT7n\s>wYa-6(ꌳ_¿*)b,ZZ]u@$b,|yf̮EdYZ"Y}lWpEL8Zѿ~%BaMg>j۪6VԽb:&Y(ʞ4 ev\cR7V|_d;- 6v˧bv=Bku5sĜ3+isb'1} fK%!2j q.3&REhi=O}|GZH6}"F6(<pZ9M=μ P;Veh;ͤ/ bM k۹zvʪ`'Qq,{|–cr"%V`6(zosfa:m Kb%N)SEW`lNqcKGìg@5P(Bo(A:vj( I5$=W?yd0/B c >ep7/OHreRQz~-_ x>_Lj9Hk1`h^ܚ._=8> 8Dafߗ/ɼ+slϫkΧ?pngMwTnןWk?~`}`4kcO*>,-n6M-yS.'ax “ip`&| F$9 c~{ښ^u^+e@qõ+b/Ta̕ -lN;*O#s!3b>N[`\>ۃ`qZ0\$nյ9W:#MA"x` gV!pс8qSf48^Yf|#js oz $/wGk4,R 17%m[פWSqie0]֓x"#Jb4KJg`g%je7Bh!uΠi]t{ Y4]ߦSxuWujIh CSw"of̫Z 72}Q8a,U}ǫ0Q 0,3,žwOjQ:۵Ft +oyTB]1U&X!Tèӡ.$yd48ydǟՔΠՂP/].& p6>Ip7yeú)&$l01 E]Ida,RsP.u lM#Sh下X1f N C>ύQejMs8Ӯh4u+ `l)+i&oy}/$:-Ysdo'CWČXqRI *3 jE*S- ַwRxDhM V4r+_4 )!Y$58'}*ݝ 0BZdR(%rذTGSy( ҫWF"ߏ#27eNfFn]~e 'Q N^K)%6GWRg}li洣j02:sػ&s줡y8~ey1|U-{ű8W ﶹkݫi*]6CGK ݢ{/ݧ3Mp"}BZ%{z޿lI%yԓ$u-햻 >\1LnIJlͬ<[\ PFb`VyJ7&q gAgvY{D;-лlSqӠd>l-ӫ5hL+b Bq?R0GJuW>U ! 
:ݘBͥUXBthƌԽj᠔z\jk0h Ki)/j#VDumd$Y.W&ŜJRաj) h73tűZnsh\/fåүS۰"`5յ~Mʶk pbt[ ʗ<<7zr^c;J\^[9.})S=nOVo}TJuiVONj -^jy9[.#*¹typ{vW4>۝N|qEٱu s(-1ͻ _/>[˂ޕNjtI"i=e[wt.??!5ߝXX VzJߧ+o#I.eȶN[YO'u[ԣ_z$4îHuY1nCW {-syœxv35_ySs|Ǫ8Co?+Ws8{vWJg+xP?epDsp{k8'pPokƾHC0Ja%½}OMrw=f lƆy=Zv3C )L3+[DXL27TmW?_qE>[$=g _~{\LX%aXB/Q'sRbTKȩ : Q׉R1]S]-CU0Um"*ޑTɧ]- LEOk-}Xydއb]TK^_.n>E7kZ|P]i$W߳[q;eQ n`,R\߇C-W6B:>=t 'mWWg?-=-aK%\:Ēuǒ/a%e KP0Ł>鳈 li}e̝Z;dʈK`z+&6W<0eCZMkYb:,`H& _kṄ옭zrAdf?0ns0-ZYK`+tTl >HYy!}㕰 aXbцr29ۑ6rQ3ef[ } ji￷Ȍwr'Uԡξw=vzI{#[lu2Ӯl&Q ׄ)Уi#v;-ȼ;O:KS^ΨܕܕsNZx\mү#]PB M IFٌ1CmdYZAxjmyaT(p\*פ`wR =CIzV(dpAdQ&{dAc='*NFRr$ @kn$j.52΃vZZ"jd( L24!2X+98Z2˴Ā=ŌM"Ah%^Nf%}<4vF /\/6@ab$H&, 4:E+00 @B`V474P cϻ?+̶ieʱvHe"딥 [n8 ˊ1bX+ uo)y i8)>p{Y`ᙄE*}[Fc&.S;̳0hj"=zrc<[=<17s~UZŦ-xrxN*e+.څ/U17 UW'ΔùEj=_yr59K_[V02Mnh:qĵNjN@NK-P7ȔYBa+Mfa>e+U03'z;A|!g?Ƙmwv+s)3eX<&@k IVlXj28mEቝ8ѵݭ{,U='}^"kIQr1GFPi<9)b*Aऒ(CP,*7i:֢|y֕Yv8IPICBNi[dz-Tn*[͗Me[walݘĖ#9 71xN%#obL{~-޺ GNgoM?Ȃwbu 3wjk'GL.w\\KE 9cvZVH؝5,/o{rv^~韮Vreq޵4ǭ+翢:$ xTMnJR7uI_#ǩ#шA/c.rUZ[-t*&kAZ9Ѷș!j>p-嗼.aW~9Z/?Ўi%T&zsM9ӿ׸ 2Q7~ipD]4U5bZ֎kgz`w#F`P @!eN8,eGc&M~Z^O_RC"֯i3 vO vtS Nc@(' N3p !p'"N0ia ^` ƎJ+D;RkH/<mFZG ]\s[$ $6+0ξ/֟?~P[Ҧfs%vqKbg ؔ@V-J| 5܏|ur/Frkc\۪ۙhlАJr2"ؼ4Xs JZxy$lh2/,fY)OpAcXч*1K亗8wD[':~%D2!:˸ii)oxTZvOZm*Uäh-"EP[ʃ{,c)u31F9;FwxNgr9s}<" -"7wñT(h؛trzյ~(+UJBX`8 3ǢDE9!GHS2cX0͍jCa6QKN?o۩TF0wgp$F31`KS EHO4CO-LM?2\3/yFϟ~[ _V>g<D)b\fi=$'\Z1r w)/sEBRV;v>8Oc*Gl8^] D^9s"3ܫUT8e]s8E8=vZE2U?O PdjuVAK x6K;By0=#kIwMq{t=]ufdd?'ۗj~yKA _^-O >L|7e/Mn?zֆj֫5kru+kv Za0^#8]6ax~ԬFZqF+C ;T{kKuK0Z*BUph,EAֆyCVAct[(^ Rgƀ\\ʧۘ `bj KJ]WZ'jМU{m.RZVRkEf Ե&=/ݯ3LJr Q^f|UWH@Jtk1q^MpO_̝]~xɨ&(ckeӣiw0|ŀ1A Ir8A+0Sa))$e+%f%/ղ^I(=.a̵KcLJdk)xHZ44DhN8%-8:!HIbŦFh *oQ<ۢ5H7'7lgAhL+[Zr꩹`HJ1/7.VDCmsJ5`t9`@h8ρ4/ukkݪ?%L`$𼺩E (՝ Mg>>P-e\VnDU[j$FrWpKO; 0j7DZH-5 2*%^P{~zyu@j@]!ֆcC(=0 +0Ci{:} H.iS=|@qڶgC•f]`Y0w Ҳ2u} t1BY ~5H%77ƪS +n<8㩠MαJ$@'kͱCԡQZp]T kEUybkdTxS 90Ɔqv}I \i6]ȫOL؊T 3Jj5AE*r` #4~ƀ p\5  ̋"BB4vU}PJiԵ&)z]b{m<4Tjux:ݝ3yDSScOon.^HWXg5wb͕5qR;_m@ZS-/s1] 
plЎ,/Vx+\I+brg6;6͜?|'X*0\""^r.kBK!8}Go|9Kјٶ5k3HŰ&?o*VA%~1^ao-:I:}}U .Rl-J{9 adyp^'t]c@͎|9,.xe Ke/Z>]WK&/b]1Tf>^ɈDFHVKtoF@-Pp_o֗16{{L`g^6]'{^b~4Q})9E<>FaAZ5*'XM~V8xѡybKZE/F 9 HW)l-Wvhdg|WsGJ3bi>'zv904x%=њS5ƀKHg4Vw&|U4ƊHl%Uj jAl9ȁ1^X B;r d,Ti!%ct^\GTVUkX^B-iS Rir` 0s[_Hj{>ry~7\pt⭼;Կo{HóґKȮ{/#~ɫ;x6?Zo,kK:_?͒֟ߎNz<:Y]=tǟgG{j8Iw ;ʵfQMm<#0Pb2+`:w啥+1 ̌qW'^ElbgkNnXZ  90Ɔ1ZLaJan*OT4eWRRкn1wn=7d붘ч0L h[T\><]hy##񍇇̽'-',e>m[I7]^2@3w? @Q֏a8ʼbN/zt+v;| :'[EdoCWҁrk|;elϩRq] |J(Ւny3꼞g{HW#:gd^]6Y$phY lEzl12㱁 x3S]UwYMµ?rm[NqޑO50UxݬVMRBZ7Le&2{MVP2zv+|amtdZ{ 3>SI3M:" jm}$I'cl_"8 k3DFl\%WH889| { QҵWK*rʣf-{qRRGN{leBRF V伎FǻS #PʽW5ZFd8&dDbohcG+7N7tXȮfL [t k)/;t_qN7 -cޱ*.bJ1 Q+ }a{fh> MVK Ȫ4Hr؏%X2Ђp5Я< o$qוcڗI ^2vMυo,x["siIj&\՘ʮKJ)KBj-KUVtVy՘Z*4QzcFVeW8_]ROL( ۘJS L^liYf NMv鰛pyٳHcj Ʊc5&1XEi@< ~ T p{#ADamidim&:fvWxb^dMW6nlb%9[h 'fktJ~GG7n|nc a7&jgX'# sv-чagFgGE>(cR,߷ȺD(Yc:б^cD܇Ȫ(gy `uD8&;aNYb&+lӆlR pFp9o]";LHQ莎S0zYL9ژ<1AHC>P/t90Up1GGӆϯUpZNA%r*@*y)ã[NE HڇEi /( ZK5}Jo.]KE udqNdAZIUtH"%pƢ)%ӻA@*,KRH* _XSmZ6{hǡE&}ǃjQ<@D `P#]9*~HD=YF\q6k%h}8]*fi)q4" ƹŠ3 D:kDǎuvpB*8w!:G9<5RФ)A(tJD yӆ_8OQ͹tsلS%%%ubMW -j7|YOY ;g)윥sV;Wp@m8f7)X  )rS%܄Z_ /w 5aœp u mBEȁJJfyvߍ7RV{^Z1\tG+PfI_X^8$W)@Lw1_)/lrn7wF͔*;&Z]1;Ap y'2$˱x֖  F _TA#X;%ED]klPXc1Y?w?&r9[N)<AŔxN(\1ߢ)HY6H -89EA`̰5&xm@jdzʂڴFڴǸZY 3-l:(Aso@d9[!Ԁ@Lh&g 9HǡS D2 ޑd 4/k_4דEa&OX>3s/4W[GpA_D` `)b 4ՠ  t|(XB gJ?û?O^wuA[KCJ ʲ` ̞ `i+0\1]x7#0)?Bo_l^0\2'g/3+ضqsp{_{>0m?nz{^nlnmllܚCu:{[ۻ;7lugjww8Mwn4lk> o,#cx川|8p'g=[Efpl^?yq%J"I }k3tez pˇ74g75Cb3y5qfq% Ja ,Qf3$0ja<*ݱ~pf&O"Q׭O'0T-O;uNX. 
2h.Rýa~2Rc2ƢO!_ *ᨽ~8+{!打_ >>;HGEMzr ]s~~|Tcltyf}/$c@3Ixl)+8|o1WH|η`{ެo~7{YoN;{RWSg _^{S9K'~' /UFxЙXj+YrFaMX?>b>t] ىv/Tg C 1Z׸3q3e̡a %\>Qt亂II6Pf*]x:§q"[h/l B"KS@u#P(C@_k[ M@a ^+z׊@ xg)}_5=_*է_;(`-AˀX4l0*iR;3qrEK҈H EW4h&BDi,fBh浶cit.V4mQi Dc6_4 ߤqYl-"䢳1:X$;CX{A!Z_*ܯrVZU{ZϫJ{X!y^.*=1??9R `9)V!([dQ;]`' 3R#7CnUo0 \RF} n q#qyb`o- [~?ݢ`gq|?/6Iج[aa3oFiZ*G&]d sG8 -|SrI 7; iUUhƧxWCrCKBb`w&)0E8쩇6vvꂻ0 3!ҒYS'+w0kq^8X]s8"/[E0f3 a>Xa# @$`!*})wW-VUG=#E9,ygi;KsYyРN`8w.׬B@&pu( jVr J(6,Tpa6b  c'nWpWS+)y]Rh$2%:3XpL IDK8;b&~اv9[%6ۄJv?3smǼdjބ} ogqm +BR ҫ2@On}ՁTS֛3!o ^p x)D/(UתѺN8 ^tA4\tJ22ͤ3UHgw<`Asa:蔑[y$$'!=S1u,S_3]MRϕ3hTp˼1'I -҂Y %eAJaY6]&J &D+Õ@TXcļQu\zBz~-("ȓEׯ_z*RCb)%ji>,R>, P M /Z\g5NڐIFRd80 h+Uc#"# Zy@PR^ U&˨DSL"8Y#rb&3J{eXW:2ːyzj*=B`*5&TAcT@3 0D+ђ\5DIeG~"$`K1;]\Q "DY,6BeH(A%VF %9: oc* Rx(HnVQB5#Ħ (ḽZJyI,=R,=iiFUx;Oٜ+?)&TcɂRI˜N:Es\ع=: /{֑"`6HsVH`d ,K֕ o5eII[#fcI$ROTO( yq: .Od}TO$_LD1#Jx-%Lfr,K(V؈9qzRM}KMw.(%oL(8+|v6d\cN"k/QZi4bbX NtiDN"UIf]CFwpϿdgb?5ҥZM=1{5u,GJubBa!@/>{g K6\.%|=bbMĪ`Ǐd"꺀O8OwYFֽj]uXOX/Z.f` g:uҀ%e]֣e܏ͳܵה.%R#;|S#۳rOoFBٹהȻ 4lލid,VrWz.9n\~Mp礵|v(sMvmw;CQֹVܲ"%vL(U@- ]bDXl$Lus}QvO~ ķvL ôF-*2/'IuF@Hfn(P)ƻVd^tlOrOir(,k4kzZtJV@r1𑹿⏣|vP, 5\ &O?j'Ƨw ).Y v>D3!>O*ۮƈgK/Q`[ZUUO;SGnI(vIeci97MK=PI*+"ړ=q \=o`l_R#R龿Ik\ScO5Oˮ.bZ)|bE#}/>-ct-mnk?I6}>2)M=s12 o`ŻPx'!kL֙`-/$W Rn_hu[s}jI'nkۚ溭.ank.fZÈvŻ]t49]=8_nwx'$>QKVmᒂϝ><ݷ^äX1XIsSE&)}z61^1K1Nc8q8SdE V\Yl.I+ V$+11I]E\`R*Ft%V Lq׾Ӫ*OU s*o>DJDJenR_3,B2Δuyr-,q^xAȝ(L8e`.rOk`;j`JkK4I}~q8D\-V9%цb(Yj?mɄEsYW)FBRĔ囙V) `F9P6mvʟ3(gCҖycc| s:%L1p˲2X՞ .Qv{Jٱ]Ce.DܓVWJ%mm|tZ2##Ť, Qkx@QӣJ>{0z Y4j jOhPuP" U6TmPu}:O蒂N!94,7Z/Da>OWpt1e$7Ǫ E-_BɸjOKбͫC(*Zvl$Ppc`Ǒ9uDpx;D_:!]l‰D4B6}Ej}7W6^mxu}:mP."䴃ڱo/y<.dVFT0&L P]tBAe.r/rnڗYYMb^3IalPHoiNeJғt:0)\#]^D'}I++Lsh-Z 'gmApFZZgczȡe j둜֖lr S& ;X cROAi:6t0T,=;fȭP v&321A"(y~<JکzBdU"N@r Jo!MDl}YvntƳgϮgS7Pb,C&C̊$͂/[.Yǯe{hV9e+6eW)MR)&kbgR&(h*SBVO?igKoO#F>[H>/iQFp΍?&e~+W K9?mx&ȿ'„(:Ȱc٨矅e.ZO iz?yכɴs_st|8wzq D\5_}7֫OM}̎@I*bV?.)X qQWD]%X-V UhpAeE?&tłnCd .^\d b뜊(007[sajǪbxaQJrT Y2"y}ZFmdR:$d<(CM/N){WK7c3wR%>yq9w9!i]DE4^ybў UB$uev=Lǡ.[E 
w uoՋ;,Bl_~/W/ n|?8_sy_"׿_/_Nu!{Q_>_ _1" kQ?}ﷱƿR<'xN~礒qz??w:?9蜋'n{Ό?>mfIs?;X>1."&]_]_}CP2ĤFϔd,ȋ" On ,a!S=;=-AAj']R{L+m[Hȕ}yT%n$^}>|olMCAK_ R([ӥ/g6u T^\t ~ :ӑ?6tJE볛kRI פaƠLkt<}ZB}^ j0/2y齎T~xihhv=!B?2}Ip<- 9:e3 IsM $l%{"(-ٗcK]D,͕4GTL)W5O`[nBD / A"rY'pBq%j(YxVW V+(}8#AtwׇuFz+"Won|yx4 {eL*Hi2KO޵q_iu F,$K,l|nI]RfVW< K]d y v K9@&wb6\ἅrǪ+6l&<ɸ6bwE!Mge=E̶H>p R\K\hfmsK Bk`6CNHKb\H:!i2~pz6_ն|uS`Y7 REW2U#qPͭbZ ([>m4%sT =unoz>s7GBD)=#؜ƫyV#{o0=W?zl |h˷< X (,p`^^<>mz !\?{"8긜j(Xrq&i&NDƯu&}G$oD2EOf="ou;7hd0'u~!"=Y /޶7좺s/z~Xlr"ux}ku'7&\7b`6z)qe|5Nr#x 4?`bЯ5l&m!U]5z7:#F<_b)nI|OjM?L L} fRH| &! svtf25ri $=#ga&\3t}LjE_HX,͌IbK`K{ErvPS+&po RwO8ŕ$1&kJ$ɤ23 ,.->C(xN{Ytֳi9kj:TsJKPF٦by0x|4\U177LJmaiw˔`dL&H 8sZX0wg5 ^&/ Pٿa=ĕڤ)DIS"S5%IpZK .TMJyzF#b ~;q*1kV`|9K&moJq.KywFxj:#BHN1 8S:GOTvq \H'\</&PHx@l:BG3G/]d`%*OzKC$Q/a&2p/e-SMYW؋F3WYȥ:cQyXb;UjLVs\Mr0RQtgAFcmv D t.NTez7ӆcsLSvڀP)/F2+x;SK)Y7@I,?fۄ=0H &; 5_F{ >C*vt@/a% F5t_R^= KoNd.46.-нT(#(j]iAD 9`xzYqQg5nZrzT~a5W0â,՜ZiTc"'B&`e0y/½F&bi5hrY`e!TP%-6uD3.'_RbVe鈗7>h2lkql@:w6Y.{;Q-^sĞ$(rEK{: X 7! n ׀6ƹMp F ">3DFڝԑrC{>bDz`҉QœMvTL+kV`C4&:3ɦ=Ea+0 ;B1yݹA"zFHN;ar.1dhɏE21^e+GOd?#`$Y4:g+gz[I8t2%x\n?C2C$>؃^>5ax4q4v&55*EsykP)0sAh`A : .9t>ܶ8}Ql ${+Q/o(͈)8pkXY"l%/X,<;0 ܠ'JQPnʳ;0y('x,3Ekb8<[5ssḖ`ׂrg;j;eՇwO%zS)r1NZW:ZvG$7̳7mİ`qW+̽b?=Bk ![}T i"@7R->bm G1Z`T\T1;Uۛ70YrH߅ ) /:i8⑤8Rk&a tXEfa[> >#Y.R#y".c;vv#Lɨq3e1ױzB*[)K w25 ^CX!AUow|E( d^4,_2y˦Ml' z`z&Q8̤QbG8YQ˶ :vF30`Bv*~gaFE⣖swR[TWQE% (_LN|XA?YJ0Si@.GWVsc-k,3Z?IxJ0vn'[ z;v5v 68Z( |k=k$S)Z!+w7Wȸ^6ݛT^@溑G^ǫϗ`7􋑔}Ornm~`ꧏ>^?/^\_ /|{_}U_WO/KyArƭ2`kOg(;.|Q/ٸd(ާ>).\M͝[[|*~r1)GJ E^}qg#߾ۇo03sB^n ͎zD{f+DsP^&ܶ_+<. 
6N\!٨ {Ivn@QNb=A?0S]C;^#b27#쪯`&|ܿUɪIyl,7JtjZٺ6γ|sѳiJ}⚕+aGkᚣNY{boqKj)7$}\v56ɌQݡVC=&j<6"j{̚0#͕]ߜAt ݱZ\B!f/_kRѥB 2=6+XT XX!np, &8pggq)*FʦZO:"R=Wv` sM0#|X<˟/n\g5qK\ܨ1< Ӎ%=ߍxט_?љl`Oqt]uͭvD>&BUn2PH %5`|EpkJ 3l^Z< ab F*k+o=iĻFXCú>C$xʛVN}.^p$Q )|9v$²a)g \~!yrn8}KpD )!掐`c!$\Ej nɃx '*r[BFdr nC)cR3?+@&F܄2Y [sHS._gX E w00$d}Hwq$ L2\Kq]Xŋو‡J,1@\V߉Nk(2D-VtV{WPb%N5Uk+\CnI%~T#2 @ 4%>@h'*ΕfèvިCJD(|SӗvS=R0_[5<S&ӌKuB7?ՠɜxY'Kyɤ__s2>ژ]go7=zeh>l ~#eQj𩨱v*@٢sꙋҎaDguu!X4{t*' 'Rx7(S/3I<:\ޕqa_~ӹGol| thG0g ?fe~|3Ɵ^w} 2y ;VzToSK"2! SSMM^ץ1K{fȳC[RhOb)Nsԓg19]cMgJ9Csa 3FO_&uiIA94YOI= E֡t㙸ce$Ex>W\e.Ew- o\|<8(vA%'(Oٜ1ҴT {t"&kϫ\5@eMBfITbkDVE/B}'\J]?} 3A)YlA|9fzƜPr2t<:أ V00pkql|3_"c[B}3hzoc=7)H@/O9B{Te;+LOm :YBUXh)od$Iϯ{{g8Oг4 !B(HH(ʬ) SD6) `W,MIjaG?n ;j0d6~2uuz7Zդ OYmnIۿ,k~1^|)RrfHsPs#z[ ꬭ*Q %޾/؄Jm9񝷒#ETR!ѩ{0uX7%@k*6apV%W@&lDseΆd2˃E#hh?t@$k ˆr l"8BM$PRH3,c1:աb< 9|1%LϏ4Tp*p}H " ɌI$s?j)b~yf8}>JHî b [t !AxyAb+!X06C8|GqDZeI*l6"sUg+F^ʨCnwr!FAC^2 'BP.'nV/_hg>>-nQ ȏoZaMo?7(C^84ְ2 GGDhvȤ(bQz ׽oa=C&~@u8*[:iчLL\f3 WtADNGy )ߓjM2?5 ۑK{FmqU`[vc}G;xP*!}ύ_h ;1-߭DjHܭjtRVU^5maAm `Df&02g =]x]!=uշd^q!0]n.Ք!k~$0sk) +5;Ĉ/! OTvXv( vo!򬲈 쏗Et~Y-&8UYYTcGy!EApUcCY]EGDv+Eߣ.P:]1 j G t|@Cx+}7[sȒh*dHgnȚcMԐ+Ny)+͇ye&\oa͢! k1m.xX<3-qמ˜K҅+1i?@ WbD5`E\iRPШy;Ꮄ_qmCwN)QaoKL9O!sʼ+B 2/s|W&s͝H4i\gQƄAA 3270Sʼp|_wv) ՓT] !?Ɠ9UZk.xF˭/龛On~Lҗ{tbK=.{%}ﻤ}}Kz=On)FBMD %86)ԦY!) 
tF?n ;чIn6~2㧇~Q݅ލ\.^#f.Ws/嚣G[xq46ߓ(7 kÁhD '/-{o\q !4\L) @X@SeĊA2 AH\R,u%u?[r@D7%jjr{@AM'%P|>2$ R\&\Xd+v??Z4h[zWU0ef>ND+֡RbȢ#:wfy1?NLD|@#d(i1N0)eR9/ekxrc C&u#pZi@꽀ydx3dIŇ6o\=e:{ՠ +BiniuK6 oG\CZg /#S(̑OY1R)^?L!n%kLxsxA -g :VZo.6vآhwhS4D9<֍qC KL#Q u6CT3%.aMq)ap"*!8-CIF~pfޜ(.4lz m=5X%CPӄeR&.IS 1 6Ky^qdU{Ka^&y&zz4/0:ڔ@Sw}"[ /||GAҜ?}k B?۸kGW=2V)ǰ04X/V9a@KUƜp0Nڴewڴaᬃ32ȧ ^c-E<"[ ˝TSʸû3scs;l{aJũɴeD3I1) `,f b%evqCZwUOܡ=&[DƐ)Y{ܬ "%SrQ oӊAx֢\9, /.a*x|a3DžEo,WO{ޓ}ޓ"ę(A37p˵{4D*Lkʨf**3-H9 mt,º ՚EXwy ͹M>yNƒK5 H&I16RiK'k̉b̸cgA +& _x/bLHQ}_BEU؉CRxf}( fTln@3P֓̑12B$hklэ&ոAi "  +9NU<`Q`nlȡ7rWvb  PG ^L@ݚs !D:Q.yC#=a i6M ;0Ɲeu[10ˮZs!Z c"11H'qt`ds lpXs?9"fA >!Ny.^x8bNjXZ&v1yMH!瓌@~C Oo}kgO2M!AImXۦ7|g9qep()EIdedWkV`Β,Qhɚw_- \(タ(v2_>V}0SVhsLv];ĥRQjs%BTL"C[($q|ٻmm_ŤxHF`עH/b٦oHX%NEsN<ÈRܘ$AQ{6&^>zٍIǞOO%B0?X>`vǸD PʓR{cҦm1igڇ|ՑnAhqayF|17l8(>"XL R&;v{:tAx*%f}<sRrԹm{AU" ;Qk;ڄzmV&%SrG}#[b]PKadert^s†nCnvaRWְǓ̈hym5C-!YY8qeoB>EvC7l8gXh-]dZ^oB\Oi/vlpK#S&[Ճvp78tVVF-#b9{b6E SК.z9Ӊ 9J]Y lB&;/MkxZc箳`{#τMrA PtK{W6#%u@y@Dj74>ۦDw#y_ܝg6ǥ)66sf6t Ә}{Nҷ}JBLE1eܵ빖K(#W)k\]y ]"-[U_@ʒ:aK,ҒO mS~]K7iK~']UjjsjC|ڰ_7vYA%*Ema*vW5c)襩F[>}Ovqe{\ގݶ]jUrT\. 5Ֆ -x8QZ]) :We>.QZS_})J2K;]"o0yػ{^xhwGAM({-W@lܯ†:YmfxK \1LhpW++.g9}F_!ev|aKuY* 9!6)˪[їp\29^l8UaJxe"8 B쁚+se9薖ߪ-5AɶQy@ .w6r\\a1k!+^> K-3e *?mѭ8Ӥڸ4s{sfHʺ8i wI [K8-SsYyU dᠯ,4OUFNR8%?u~'*CZfX}Rsa䶦f9H،t(T9O$h}EgCbM4uȥmdڿMgd*PB+%WZ\n#'WA-F 2eR5bu6$K`tQT[abQiS˅4 .P\޾d rh(tZ@uVuzf|ݐp9&(:<]%?:ӻ9{óL8p]RoN~cӓ˟OO//>4A7Ќyhʘy?}o^7g\~\?c`]}nq/hc'ND|M½'3dn.]#tIphKd=<&9K:] [Y|@I|+H1GI,;ΠٟRY@Se7i Y{eP8oVn2Ⱥ4洚${ȴ쒈A@jz?ݯgVqt\=n6;_KMm Xdɀ/E tKv{N/zs} `xq f 3G_J%n^SdOu倶lHF%d&lI N32Dn):m&s7`&& ?;-NBdQ}%ķOIELQ$@c"`^@5'`&2 (2Ǎ_~L}Li a-]! 90о_f MكHd]A>ˊ29R腣v WBjPcQ볰BjPj_e@/kIEET@2 +efOH=(gKrقLأwܟOPX_Z0u!]Wj _|)rvMJi4(Dj?5מquRu]*5 `. 
ռnouA鴯uXC4(M(g3zUչW>auԃ.1{c-N?*_~ZT&M@.VP.i`hu 7$I]ȇi?Aƒ [ʏA{AvoE`Oj6jtoFR<&k?tERaތ:5k9q*E ~-zP26pJN0#(ѱS2ktWHln4Dۇ׾*G㓓u-?`gM[@~02y 䇴UfǘA& C--jݍ) o[:|xBrj (ߪ-d@' `y>ɶ|J&/Ք(O, E[Ϡ&*R e%Y<^^ݍur-cI5YX4:|$}@Si:ś3YCR.<\ؕ/Xg8;}{l*5AW&?Ґg~Ӡ->4 NYY'%Q3 f~C#31Fs0=pq("F!"Y?!)i5A#H#EJBQn Tz^ TMwkqQG^oBssk4wApAnj}_~D}޽%:̶=gQ)G{Iw]7?pݏgT\y|yiJujfZ}3>|>{3ڱ|4sqOHg%ówIE{i U z$dRRfL @ayf^Wk՚yf^WkOfôNQWk敊p|jmbϷB"E P9"r?=1/f|~q6$gr`Xݪu{cF[.'/Q6$6EPӿnDii`E{ءVgc gL~̊/8RhG [^o'-'YV 5Bpe#6kn\G><:Y?tx~](̂\t7H&d;kŸԦ_ir| gO 02,^ 5ﴀ <:[]ajXzasX Ŧf ,pAÈ~҈2*b,"bҹ 7p%d}qi\|~e{G\C\B8*n&oG{vËғn>«NTVaY)TY}5}q 0Ɖvӣ6ua_₅гJ{Qw]H)M%J,6̣{:uw`#[ lVu܇D9y${%T`z),aT`F. Yd C@2p/ᙂ9u#谐µ\? SPgC{pB%hF$V!j==$^^L7^dD&~+_8{%u3= )؁\q-(n38qK[g$9y wygbەѼ95s#@BF}G(t͖<F>dsMf_nL\y<(M/U+s70~I 2EIU Dk\ /\6k? S3y:G8 >h `/w~=C~{ȯ@~\y vHo2 K+DY4}Kq$K {cψrHr8>>"\u.ιA2ɒɻТdAOonจ(۲1XIb)=RAqS2 :[J|Q A' C.r<,9b1<r"]ec0\lq-?rJlߔed˓9Q8(JiO8"wڮ憺8a+[ly2wòL- 7״"o^]Kڍ5j^% l/8!Ls1qѭ>EKmxrF;d4c™ e%гR2|+;ϪoC%{m K/oi\WC3rUIٱiVnII4E4"-><"hs6ЖMD WlPcOw P?_Of6crK/픱L< -N'Ns&WUvMn?֏#ŷ⿭wrصXkd)#Bw ?-M8W zfz4}H1MY΂FC;@<} `^-Z4{2c if htP̤,d (,D`P(-SJ8CLV33_bM2_e}_6}"<5RsrSZzL]/_tK> 议 ߳~SoٯYpH]m"gp L6NX%Y2@AL`ym&8LB&M鸏"0 BL!]՝ ۛ轾\. #Y(Sdb&|ktr@2E p Xb$䏖ekteJHjR8))@\Al<HJ'P攂1[ O.YGHn7kzfsS\) Ty?|w'h*wo]}(0# ?<$|J3z25&o{ݛu_T?٤vK`a~LaB-0 =t;eVaᖁ^d?^5E/#Y {F'gD= 5,A}U_iVT'3/=ѽLj8yIn>2VѕVڟ>8xTfT/%zFgOiÛ27 ,wb aÃs`p4pV[%Y^nOovAxr;i|߼MJnfO4wXzRhMa00wRVRN9r#7zn>L/'WL|8 .˛'i^xbxus[5?_gC{j5-l,`$C)}C܄hѣI ɉd{|n5y$ILY`Xuq30i^ $f'ZF)D̎%UJbB*"wN% 9R䶳( ]!` CǼ#j吁҃M(G(@s j j).&X{yWu+;Ƣ6i 5CyZqʡ5Ṵ2Ր<2-hgt!b ȴî.GCl XU*N YeMjV`w̠,-ZƆ@^$UA3u/Rv ]. a]< Z9Rhla,z!Um:eBI$ƽO)kG񝐘R*2e@,g4}NئRB G?rDrI&HOĕ+G bFis$@? 
Cl@L)jEw])ww789+7_!'|UҶC3_!e{x igec\_}B0ƛ('/׳&ˋI[01̯k)ɇW\.U|b{w{ߗW?=ٞ٩UL!(.& ptuEm7c4@ўSRf'7Q<D_)Iǔ B6e44#.dg-%2 ]~V/hnP.#x'!cmBQv2C UteMK?Mx~Y^qKhO2sy\7mvqq.vJ^yP'?ϒ;Q!od4   5Gf(mp~BM۳  ,̣S B{.n)gJ+{8(n\\"jʡ:~>yJ,㇤v,Jn+(r'[TR Z?y~$~.;Ybi;N(Prn$DG7 gc<}yrw$CͶ/發?+Ѧ9zw#~z F4U!Ϋz-=6}*Q058g}w[l[[xݝt(ms`܉kZ=PB.r7Gcu4?=ֵDTm'MI{.2&BalϮ vܳS\5\el_c؈te7h#GX#G=Xorn>\ث8l}+xu[ߣrg1`@1 eϷxrDђ%hh$]mZ{Cکz"wVA5 7~(ǿQԙ5S(y*8-*xϮ'#!C0@h%lYٽ=Ċ٠A ޅ{,;3/TXv_],g xYsĈ_g[ x$/r=x}c#J6$Y[y &Lb8,iH?\]^&W\Z w71 ̨̱rg@EsQ8͍CCUWaTj4P;{Unx1p;ceq(W=|Hӑ,p_`n\G˒KL >SnUg:#VM4u&h}^%}iEjlD3mQX Rzi<23!iH9IVppTH&Om\璥%:/Kmg% #[tQHoJgɔM qsH XU5̮ZSYbGMbRS$qANrƀyt<𜼦gxvt%As]uVriOg53M5(jZ$M s:K:d*~Q*m.bQ)d>'ʬuYUlk\ΪGzj$JOrH*8bҫ23x%*IGev LL m)$9eeHuAt/#YSj_p5rĠ1.DOP&T25 p!l"$4 tRJNq@/*r;!zϳ1lH58VV eqۭ{eؔL2etV ɽK--up]ꙛڪveLo@T'YuU"a@я6iGε[U;nkXK_4=@ xef 66^ؤ`1xOlKһ pEbShO1؇@:@J9E1[1 ].թ&wc*,)1KhL)(٪WuJfJ8%-@AzY^4okJ4J) ߢFAMd 5e@W'qTOpPwn}eRIc|Gj-"MAC)@? }2Bb HN ,' /f3c# ^+s33S.Rjk{%VdKV,rЂ!@**SC/LO'_Ѩwm@El o?x})e0[%h52m Y! [#Ps2rfp\M7&WƂ btvdt7r=3݂gbA|şrI(H˚c }sj1U7KKqaaQ *3B6zJ_~[g}OްŃȣ$u{eɈ܄&a=Ac;X0w|)E.>rRɹ|"iPB3Bӫ`[_[z *C6yELeї`Ւl-N\ CAiNT.D"ɝ b1WD__N~큵'}tdGMmvS-k=&z!`=} ߜ}_ XL2v'`S כH3 ~ Ќn^lw/`-m=K[ccr5Λ{QUYJ"3%mji*`i*i$h`ҼkY Ͷδ?B fwn]UgwUr`?L0SgY/|b+ime2+!a]*] @NpZxE< u%y"WhR"0%޹.=f';D~?SQ 0|SDf˾:vXm/qmF`H,ݘrQUdLjfqlþb36,4OBbUIP&LEŧSv19H?zHȉo5]˜u5"0av6owmo#< nAQ _5Bˉy ΚiL7WpG&rX)Ist]-LGd-.OoGbTҕW}Y^@fZ4\Ak7/w\F?T!A[ζ5LW~&a`/C"i]MYEQ6a{<&U%GqJ3lL Fcfrth" eHO/|iPcg3.06~dz>a6_Me 7zN n5aLI30 љ|ǾŃ~3u=mC3p ڿ%Oǯ[3hoh6p8s=|Ϊӓ93N2Ϥ3Κv^V0o3Pf/ua7Acp'Oo>N )Nn;?m9k8@ )kл7\&aܽw·9ӍǗp'Ӕ7÷߿z?{~0i w8}9ĵmmҲŏ.W&Mvff'K)2ӕJwv{2`/f&z^ƲsrxIN7tڇxEwŷ؝w, u+?ix0ݤ68kwe`&;3Q =c9-}b-\;rpM&%MVTJRLzSGrY4ЕbJ} z&%poگܟG7m˷f><`FWӏKɃߎ.o.$wG]v%,LfnGt>n;0Ot A7e;P0]`!)xF&z&}?A`Ф윝S{ǖ4}]D$ :lhnO@%`^q$}9mu L N<_ nMƁ=uIg׽VIQ}Lk$KmN&eOy Ea$_4 ZѴb0}EӰi9B"AVMÇiiX^BX&̸l/YUn )eBVe3 u] $iF@J1۴80 qLx2drH LVa,!Y8.J5ZVD?_ROJ򻈹_sg; br:y lǻBB~c= iZ)izO9WWs; df׿XU "|gLu-`uCw`kQWAWIu,Ro)PLjJr R\b~>W=88.YRq\9\^ 
Xj*d}MyPᦠip;{1Gmŷijfywθx'Y|B;:O4-MP:V5"Y.?]yJ(KPÍ"j#1Rk~` L},xL"G!$C$0xhd:L2] .mKk0LbnnDm?<=nYc/ҤTю[7&_Q ?pJUQLuvrSv[hi'<0q2 }Pdyjkcn}ho \AcH~ -՜!vꍂE>'0ԆiX+!;"ζ%;+\ᨢ3Zv2 mّPa7[g% ie, )IIh]>87Brhu[ęO8S^W7]UrB[}Ua| 4OJeB Xy:b($q鿲〛ߋOr8\68rƀl;wfz&0D4ɧXŪ"q`Rd4LMuN*E+ H!EY'8Be"kkkSehgG&x'?C}Δ.!f_qJj{3Y+1  󗐙5Cr2e!+q$NҌ|K!r(DHyhO^VK`J{ɦ&ЎҙBAm1Vm<. vRO%Ô9wIcR&iS{3{}KNjdY\>WOo=.7o^.~0Sn?0kEϺA~RCn(8'T_ ~ t qQ-*F#X#qó7A3+|H)KiD'#ޑ- ]Lj,}fc'{l.ӏZ}ҨmcS< ^vNRD-Qi|?^_ Mw^5 TV{<4;urԵau }r.h?ux=o| 6e[Z Wg k@kz̵{=뵫i/B :s0=[wvVQ}{mj֛G^Pɇd5G0ϾS=ϳׅ-Kzb-҇FQn5\[$'krflR{8|JSՁlu%Ԏ8!Mɜ&mzram=!դ*nwcv]{{ вc1 &GȉQJ>#2uD [M cRv&%L6",-L:ǻbO 6<6r@!X9+1PȻ%\T_X|_BܯV;Bbyn㔺F3U~\^k[t C54ݥ?yv=u׷.Wfopy}ڗU?wDh-{(Q\\_ܗ@i'Ae :{zqj_(C_ٻ b= dIcSo U>ϻO^2ru;,Yֹذ`ᄊ@t3Ӌ>>ЬSftBx 53_p!#bRx2Y'!|O'Y:Jh24ISLB}ҐR1fg$Z͔0EPE2Dr=gmݗQq(D`@Ru;kAJ\*smNP&pW[?k@Y9:vh"Zʂ@=>H!rH&T{AB`k6GB2WhRtAW;64[>5nwas'zYEGq}"©sLU ]pUyX' ^7dbQF.U3|1K6 }d`ix ;[]D5\LeB[',R3FHg,+q[ӧѾ(ԀvL.h@AZha"r`XsҶkre9gPpd6Β^CA^:)=ID']د_g4DzWݦEkoN(+5spf?U'~&|SF"4rtye. lv)$ܑ`S]9';is$6r>PV!{dĪ*HW'\4ګ‘S*gl1t@T$ bpc4 X*U `ΤD0(|ق/=R3  jfb ]Xم@e*զY4hP F̀׀9-YivN0J39N6URR`BNW*XV !Y B|8Ӷԁ}Hj=Nx>rg-D`i͇ʊ( ip쟚 ӳɨ}i{Ԍ}Ѱf  _@Se'.VM||e;<9r0'Gځ͘T](wai8i@MZaC#C??_v{BA:A{;aY˞لxsknRxF [5`JJjr")8.=Jr0:MGMwJȕ9h>[_ =څ,$"j楈|ǣ gjt:_%@0K.jqL".&,4WnyPm^d5n= t1 2|dʚv]v'_~1PM5 =x=tc vVexN좊nBk˝QД YHڌSMF9$$4EzGrL_a/Y/,j3U$zѰ{IMuGlӧF(&gshx)(6U9)9S/o/ uEN8 gctmf՝$I2#4'V$#q6Ñ7:,IsHwrHiŝ^kmG {Y}]XϪX mŵ6*ǘ ;$aB; N Ѩcef*ЭBp%^F<ci*i bz 85UNDJ-P(FUVxjVwهW['  wCtOߍ_>)'(WRM'{I;N?jIV[Ul ̿ߦ%C־Ժbv Ow1eD 7#2,2,$6/3|0,yT/V)f83C.rUYId*E2Hy (nԳҎޜ;WgDp.+pAuCu _y5 i=gx Wܴ>{,b5ofQ>[ǿUkZM]\^_c4}ctuasHE7:1;'}TAz/Erk3/\S=|S#=_ mWf(x^ܴ[eٜn5Ot[Mԇw'gǜ!\yv pu{_,U|4hO4WCõ볒<`_V)ƃu'2 ~I;4{?gA/Vjh"Чe1K[?-7>yQ6>Y}v0T)rcRz4JotRKzRZ鈵Z'a EEI[7F9n5\[$ bƦE8N"'M$z=bzj1Z$RINYFi:abjF{h=0c3|j<(w<_ ^''jF$p;`ex(rjUZt@ڽItMlLl9Aw wE/89?yywN@+)ft1 @t^R5L!V }r A^}`wgY>lN@ Ŭ~·r>IFGgTpu{3˨qjiz 9އA9=]>u>P o.M>ޞ~Xm%?Is*~W{½t:=xכOqm|&|ȃv1zxvE͝3&Pt \~g Zi -]IS/q\/ 7٢Ϯhr~c-C+(EW.`BLXN%tש؊rj ACɊFū 
}]<ɁҳzehFtRs)~4xn@֢14xv'.wmIܝ)|[ց^Q\cܵDT87KJ%)\(EZݝK?|5M *BeƬtn#%{d,%:{zSFc+R|^͢m竴I=v85}skd8}nw_P%pۿ^ 9,@ƠN֥$C@խOk@_A0C~a9%f5 Cc3v1_?T3ue E%+/kR5FIZQڡ5RNviňҬ"+YDsyk`QZM(;CkEp*v`#Oc3ue h6X¼ݍ(VFYX;te0l  K!C src<\o{sB~L 9Q{+VޘaK\W *n SսR[/ڞdˇ9*֫0="ӞRߥ{Z~4m -<:9љ-JSJT˪n*'7:ܲ ƈjH+}}B@Eq ȓYZ+== k0t N٥tp׏&P5ArnˍkX[ zp$dyj}jg=G8Bb"4&Hl}sEn7.xS2:1';Edǿ8]F89@~y!v<uc1? ;pq.hF^o!`PcpSfc}fw,I:Y:7n{yȢVj9} 2{%Ba4(##mX{]|B΅22xkSQ]xYTca)Db0 $\$!HY!1`Y 8pWY X\/'<:>vFyacp;UY0yBs!d S!,Aq[)Yj**LddBhxV.YYV2,JQ R{s1ʶS"a=opN_[ZsEhe*h5}KoGcp4;?_޿:jh< i_ǒHN1RMbL`̎Z=3|?[=|qusCdXyHc f_N}xVךqJ VB21hg37:GwCB"9d靈Fa)pS6ح$nXh)0gFڀ2di1Ѭ6xF\6rI ,=TV]PT7#yiT\{_I];ѷ8`菭ӭp~;ANkXǽu_y0ϓe .Up|wgO .cG èS!'0H%LkecXr0PJZK?:h$K[U_i6bLGK^wɍ˽swr&_"(6eg"լ%2lĒ+m%H$Fx N7 k,5t!w;[ uN96eϴl+T*I{Wm0q`Iى{N(T' %Jja!Fj/1S,U3G[d,&Y r0Uw=Z6{+J.'β;jTeXH(;,Y);ksf'T\l#,ݥa)8TgLս h L9@u><$i`HLͦ[rE`dBؠsK_w֘oD6gJJMT?wdd)DԷ]ZPsܙQ4adnI>8A#[hvUr3ƥBy FZ^3'*KfZ|+rCq< )1F!ck$g$6WccB M4 Mm*g l$NT%'bJXiG+zqǜ҈H0:OI+IѲP%RT(}ohצST7QZwD62l%E&c)\7D $Ks˓YE(ڙGm1mئ؂F 7lkVJ0 HshܾZopH ~@n1E7C(q֫D;Ow;~F 2 Je?=}jB=HthdazYPʃ>Yd#ϦcoC|k{Ř`ߘPZ%XXۃo]N.~۩ǯ@5Il7Fο#:G'5O <&K 0uw9~;M 8>U_fmOڥ^ysv_=NOt&>~w{3Vn{u?p|o_]ͯg{?/]Ҏ;OiwBۛmwMwMW=ӽ _O/| ttCoio֔}9y/ips xe~O;NMŬh04שI>t z4޹d<0E_۶i}5%wg9[1jƳ/T.V+ɫ5p{M$ GOݞLzSpk< ?hY jeTL?`ScKoٯ}{F~ЌݝL=LCvگ Ս]@;I܀[\ Mu']6vxzOGQ%o闋agn?Hv<9L:K2)\>e |Q?Mk >@Aȇ. 
oivL{7EM+φó`|x7v w6!U $5:cde&ξ /O3YEk 90ڥy3[>m+ruoHYxӪÕ7tdI3R3 n4N]!ٽ,P"m[O"`Qʚ{aQp9 ЇG̏*RkAw]nmT/BEu\mo[.]VL7X^xfDgRjXJ6pMJT+β)TBϰs|Gr{ k㸟}W)`4)  +!&"m\nDp%Εi<ؖ[t\CC-)J3R=|iϓoY1h>~ M#نs--rO׷S_8]_z<\OF-S GuF7I+dV Te iɠQX6"N'K-8O`3'̍{qk8F*ыPNzcKx*BB5 [9'V PE F2JHO-rr)J*ig&Zc`\Mi8_iΞo~A͇0+5 -C]Q;/.JbzM%OU!ؙ O+.qq+SFWÑ UϙPj:l FÛsK\lr|ӣEb B.G[w w1CEtax2']:p"}P[্ƺlݹrPӚGeYiye4d2윜`"%:q7p^kby՘N8AN (($_O0._g@^ =SY&ڒ F(%9VK]z͓7i5[S*|i?lIj$( I7r<!H 9)Xg%ObG uv,8L>%b3+68JyX; "M촒 At"`IJ, -$Ō$(3Jœ$Z'Jn X|w9 47qZ 86&T'T2`Kr"EʁRfҦZ4Jc҂6f/M$e⩉ D8} qX sB (LLle Sp@$&xqJ42S'AXc`*P%*\0[uX$Ա [`%${]c,V` `͹Wc4gI[XrEjP\%hXNIMtlt&['kY[dx"8#HuKQDM*0gHoy4y̙Aȹ0ۃyjYK-tٽgl|l2},Uⲙ,G$)Vv<.z?1V1[c]j\TR3{,k>Gˌel`bDް\hN%8gVelh(jo5 CalcQx꩐TK~*ȘԐCPeK ȟČieuOeq&c6 Fnq=us V*(ުq(v>ц4 .`Rꛦe[s~ c od`AtР Rȅc0Ɲn%51\yr !D;2JM83{ N3e`Z}!p|3`J ͧ:u!C noc$UTs [i8΢a΂)uGz C1OjgEA)VpjlȿD8 8B1}x5a?8i}p~](Ua_ yeUퟢ51]i/'3$ :SV!8,ɽ_J5K3w 66\hޔvF !YxaU~mb'?C5۞M0tZP u:$msO*4^:4Ҍ6'?ѦEm+0Nhyr=d 'aP2GIjp6"Y%dhoL$12L3\uY"zOYi2*fQZkP:KWNQK\aƶxǫ!7nX60OLLJS u[344澏 b gJψ[Kgl0e!7!t}2))X4˞y\"i171g7SkEK{C`wW˕r\9;/d<*=-o_z㚗ra,}VRm-|"3{5QX_Kg3¸^iPxf鷋@*vavа&T)YJQF&%Dm F-j] &YQ̪3~|nheٓGlV\sI*r9?Ih'`xt[|9, 80!rVL'4J U`6#H1׊`(k[LUKnk}m?QJ᜔2ҀR)j^=+kS G )n(g^W߇sA&II&>Gw;rf6Vι/ߺ_-Gӄ_c~[9}ɹ>uq[0&;E$LNQ6: NWϙ[&~='wOi0lpE}&]-~>%%yĨ<$Iru kf0}ë7?f"{CBG[!}OxK֪)4 'i 3c ~r[;RBc^c!sm'7GeX 6r)Sx˖RnѶ1sA(('Y͆k=,SNk.kATzͣ=CkmҽsƸ>ўh֨a?;H5+LDБ!Wl[,Bniy6].E6 uxw5gvvppdDHb y[wd&R,9Zb_X>=9'BQç(;y5J]%[:V=|ETN'sB|zrbB9pTy"x{M@ O]ud2k[UaC\N]rYyMfw׼y=!WRNv*aCʚU Y"o{ t+a5UsTHgn᳽&G.~ ~V=|0 wF;- >lV@<: ws;- >h3]:p:|zry\?}- ӓESbBદi쁢Rk Sq>bdhaI؝HcB">!yjTH@Fʤ;;#`d{0K6t?n(}cA$DO$Yz%/?^i {u}GA,C.q1Oҿk`VA1v_,?3Эͻup{fő;d  qjBQY,NCeZy~ѐn2",e&98)f8WΕn6䘕&*Ӈd+ޫ뵸W4uktݾj }^;-KV41(Q[#/=L:@b2wH`^Jр黺G1~<5Wix[w^v?;qo]I=-Fr_/l Z&?~sO7)lKC?99/ENKR] ]13#bR<Ӂ\BfeTPz^.?c^\][4Zm<t6{yn>\$n>\]i1 71g7SkO7g+l sF=)_"Ap``eb$)]Z=%s6v(@+3RN涧 PCW|Rj*ȶi :-{u=BWыt &qH_^ x'V]$ A}i *Gͧ=->VQZ59 YCLd[IG4k)ʬ 2.jn.5ݤB?cl5|yf!+,٣f73թy撺8{E WGW2#éWoeH%6VOVԄ!; oGqG0,&#&-Zemn9M\sIHԜj*S-~ eV>zkŊ3]$UFwK˹ 
]C*rLMIE} IE/-1Z5?Y㣠4qe+ saX(],u5rW$iF\lqM<"œ1ްF4hK@֐ԊyJ]t5hޝ}%8iBU\Osl\dq΅CHF* ھ[,9F2!e>aGGð)<J9n>@-< ="G"+GrR)hBDiCrT`,B3"2it{u=gڕW.xev^5Whsn\SBU? V+k+*t79HGlk _:8]hp^ c2+յ MƫMhT[MXrgάB|eUPrWgZ{_1b`dvvO;hmm{d9E^J]JYԭ &wK<7 ڌ0J1cJQv<<~ltwӭҺ{#ab&!fFi~ i?w IbO KO>:MO'N$,Wo¡92kHpY]xA4h{E>]!I[a,5:UӜRI8tAq̟~_33s(celBTg>+I}5S}P c9KV{m"!M0)nqii<6Aۻnŵƴj U3_%?6F_teiHWC5]6FytNj] 4y%H-Q4AUHjE5"kP};ji}*CSq}{)8(|+yF|LԔT~xhkt#%)w/M(g-tPnɄ5Zت[jcۦr8jUdM7Ji9%GHU'Qc m& "?`} rDG&vR3Hi?'c5oaqÎ _#!$1`^3c0G͘1"0LHd8(vto 9 5CRs5P VY /GFe,.vOΖ q,Ͽ~븜=+ ԙ!Ξ$y45Gv[[O94Y4y4XoªM bJ <\c<`LbASN3<<3r\1+#H3+ׂ+A {,ƺ/3u_Ûc4_fb&$Qk,=|NHy|8'U >vBͤXP',:%F*=Wuq-F|$*Y*h5|,_YEEk5);ݚˉ09w*Az.3.*n]R8JVc T52mV._.8Iq!q_T?ۙח[ D+61v\=F{yVBE{\?.tܧ۷9;O,a9_~b5{z_!Li]];}ȬhɏՒc31'5^%  vdƍaZ 4a0m*捱?`b̛1"smښ 9^ׇyf| h8R+ W`f2xLs93퍟Ea9䜄Сt1sEQ-CZF!J7B"DpEK Q$kլ JRáuBɁ= v ;NqBIcw" GJ4@)Q9ب{|OVOzjdNdI2ߏA" WϹ?S\zrFUK3G9=PaeBxݻ鍲r^QVz8[|eFOJ".iI$[rZ': ljL&qSܙ@G;,LHгxJnǰ ])JHՈ 꾹b%q( _fzGϞXpGNgZ=oB`I2eBJ2`1A9[,@5B]懣3~ uM8 &ڙ; !ZI*`樱09>sc-VE!T 㨴T$Rd %EV( r&`H.00e ~QdrAHؔIlۧ2N39OiEQڨs:ncQ1$6Cy PQ Ǎ ~e)D0?dSJ t"wNvdfyF+ᗂזjRj !Du!x( H>Мp.Df1oxTHhlyPJ Wc~qm/ `MgcݖeKX.yy \9q ]ơS'u_:u"|~X.ַK=txwO˻U WozެoCg|$ ss.g{JPjE ioE$;O2'Th^?vڹ+4#1vWoQ3m] !\@ג;9l#4NrּÌ 1 iNptf!$.5z\/Ol򦩍WÇ^> sp%VIvM^Ob}H1 4Q TW]e| ^.\;P|׀l"4n^|LRj?&)6u幟{RQDM3ߔR9?V{i| th>l5ń96jIn1M;v\V <]/WX:n[*.>_m6ߟONbޥI.n)b12Eso5)Apg?_HH`o-O z^G>,OfI*4x;ٲ>u"ic/ֿHz-qWDn]>Z{1ǰlP>)CqƽSG7Nbt ujQ2Mw mtoE|ᓏ8F77r&j1qonS"`GnW#2G><w>S&-g$s&S6$+eCDkKA"9ԵR.$'.^$B:AzY3h/lwhvW$ƙ)#$q8t^_D{ܥ BFs "1x4`ݒhNy(m(Kˉ#EՕ6)mDT7h9!J4͊e:F%Ci"%2r@3!)`gXXV P-miyN{A,.Ռ_QYI՚[Ui-4r^'ąDt{sa8qw`j__a5X+ambaqdT c$V<B5"RSk5˴֌SK*ۧÎ5&zt)nW/I5Rވ}ZbӲ{0 {Z(9[E VП$M' c" NQ\A("3Kʹa"!enq5.GE.p#b2a7(:.:"-sHK?5o2eb4Vʼn˔RȍnlM\[lzg|9" 23-"j0"vSXf$R4̥S97.j`DPx.GDmlO-R?{WG=f/~ bfzh$h{,KjlO7}Ye9TY"̣C-3 d#h$q(=SrԚOH||a@"=7#Y];9SwwSeIDC76"L+9"OH+XwܑͬՄa!"oڒnz=<ErН)GɽxX$S{B "\$΋O\P>A#gA6$x6L}hC32åFe>#,51Iz`pp"Xq5 ?/ "XbmCiֽ)t{'^ pFaJԏDk~#$͘}9K<[ p`J1Qt!1:n3ҜO,k{FyÔgwP c201oc2&'s'̝A{fLdL52&~@Bh޶7u~_~qY/euWjPq@뻕d\nV\n9`@)U83RQK 
;8IDŽ,osM{*(ehߞd@j9ؗ//* Vgy 2)snKK2o|-b3O,ǖslф|^:mpNш'ޢuhzq"&s< l980TKTJȜn:#ʘ5/SZ̼ W1͉b3|0P2&TPJQVJU+ë⏲udXU[js^TPEQrnXY*hnBh#J9c)3XdJJede!Jf@ U)˒8"R#c1X6> ժMg1͍RWXq0ynBn ˺Kst1ƊEgzd.~r1=LnJ~1fiLn(ḮX!crk 2(z>.eOeLnh{襆)|ڗ&-EzˠT ty'a1yTN2_$]t^./Z9gP#b1 4TܣG+; _vO$(F 5c> 3!gў: _Ry.ޜ ;,0DGH-z[bI0 .0#G ڀP@r/5_T |,b¶2|i:.Ǔm%B^8D0e'MvVY3FX@'!mdƠT<[ pFa3c.5|8E:ns"8n-B^8D0Y #҉v1)z_WůUm^ Hq9 α .~yNr_\dTyz.'­#y&W[br,)jNp%>KѵN蒔*tysk##GL2sg|g%5(.3ԴnR3k䷛Em M2/u^~HMflp[W[0Ѣi%o5e Y:ri=%n9-9BeL e3"X憾ҹeM5OS$s IMOu;&͟a?M„E6o+EȒrYJ \Eg"@X[v?/Ktl0 OeQlWbQVPIAr4Hd]h!/MnCS= WČӴ pVeDocL̵Cgj&'S/h_A0%vڲK[v {%v>[A%<(:#yn0{ݶn1mL3[gгyzr7# COG]4'72 5$'6484]h ,SVwO =2u;5W\nѿmcQ΀<;x*B)s;Fh;F\C}nHݎ Pj}Ճ7|Rۧݩs9 UbM^vv3j[BB^Y\J^UJsᦺZx^`UyU?X$w$t8Lm'v/jz 7>lڝX%RGcj]2]_amk/7w\ҎΚ.˦bv[W=?jPfx>o[o~w5uĹߺ+{@nq?= j {d&QG^{sj?y{N K|xB hM0~XO:G7ͮ" /(u;s hq'$4\דrmݪgC3vS)2%2m zP6gѓw&wxIRtl[4ᄍ%C{C)HQS0Qk9F|sa>@uCEg? 1;1 qp9zǹ>;^| '¢C%E(J0k )(={y(.R 4 +c/Y|TrG%(. 6g2 L0>y޸Dt$4*qх撃ڸ!bM X) ʠ$L0a&r;];x)%ښGP᣶J[U .AVsɭ(XBҊZko* gJXΨ=kVB0|fK30g,YMږ‚&QaN1^UY8+BV.S74X!7ZA-906+J#=Xū 7?TTdef\J/Z`YeHV3kg_,ʜŘQvR6tSx]S&k]3B-熾}Ei2԰_v|UoqwA__埾+ъ/ְ*j?6[0I~]\zn}u~jfLj7$+Mn_]\?oԑi%գɆc&I.nۧkFCs~F )}E!-4{uT91E^:{@v (^mV& ^/Wx;.*_KHh๢fEݠLaj \p\Q&E,'5,bԗRwyvzPʥB龉+G;guu k*F{GeMGל$灙000%sy%OF3LecS j/ŷ'~3Ֆ>o>F:ӤAjn虎Fd8iD6OOb$ 6]Neɸiɽ/#>A3urf`&IF5Qr>I܂Nu;L 3AaZS@p < ̆bhcMu'eCܨa>.5r|yysa;lyQ]5\j4_~,ܗZνh.Knm8v6Bnw/m=pFG-wE9hRsKVVtUnx_wI\Jki0tp2 4//ou#3?4>4^%l.U\2]nahQo"p`JrXL(.bmoD/iC9vt/n)!)g贝bmoD9WF23Ytpn)!))ʹ|Tpf}SL'8}wU z>^ݾklmhΎKnwo_߽zo_o?Rڞ) 0QxLGRu}!B~]\wx%D#hxg;\_#$leL,({7 =RA`,FASJL ^drjVVZZeE7JT%V\浤jE,d0,k][>(D鰬Yɹ"%ˢ$Vb:yCgB<]ؾ/  #U2h=g#iT3S ̀Y&ZeynE[>7 | x(s7DK}/5=!=KXܪnzv(NOݙ.L@ɴIݙ\@7\ɫպ]bj5f:Se]3('R[79 rbNq]}sc[n_!"SNdE~݁sD4 у շ׻KB*.G0jnꓹ612KA:a?v0By jFqL$h-I>nj9 udqBsF ~t=%!xJEJLݛ )&!(JkBiffT9*K|r  ̑(K R 2Pek}nH_e)y$P݅Blɫoϑlɒ=9z8TA"_OOwLw@ <|-0o m3}҂ԵL.xbX5ʃ%ቜym0On{'.1eN.PVQ E]Qsym0%i $cо84}GA :M@)hS NJP˕b$Et%4YրjŤRIN"Z!dRc[IiErWYϑH 6疳Kᢲ\M fL RIz8h*p 1؞jnK5N䥕Ay/%2 \t'c| hIZ$Ja :4+(B=Q2%+OHehJP-!T"\eb@RL{kT(|VY $:&d!RP 
TCNK#9w9MeGfE醎b Od TIJ]\3"L]2:[2_\86,ˇͨVty@i)TwA*ҊjN1sR,p>q_/S-@lR eZJ:EQh)2-愝*>o-eLKYu< -eLK+᧳[K9/R{T0R's JTP ub,OSΜF#Kp(͝I+HqnjUHQT~TsOu#*1f(T2-VXK9'%t\7?JQOPzjnTgykiuYzyZFzZPoBC"6BOjV)M& N! "ӹ텣IJdB2olq`e%zL-f4tjϝ`ěOs* uVOpGNEϼEQ 8A;~292#/=͔B^u&Dei}rt{UO Ç&B65^ b?ZmV[# Bvpω.WGdM b5=) ~m+<bQnjN _ dmGUxxIlOlsM<f\|h, /۫^^ہ}wk,+?{u .C/vt]ѳڊߑ;6Sj}qU?|q6KM?mmi_o+مX+ A{X9f̿xlRCa1}տJw= ?Ǚ跐Zo,GrE3Dϯ 7 D||*9IӒ+FB`8E^چBGphj{K&ԓ` GZxs='U~x}='aܝ+ \ 2j3i݉Jq3"' +ԦݙV;Můxt@:^;eq < +`}pXF]<}7(Ne{V@R8z䏳tARK`>QVrJCkbxȊͰ3z ytzolT0R;A\W' 6>]_޾yţM,PH/fF .ۆ^_!~]t1c*F3>ݭt߷1R"Nq?|oBy~W7J]QJJJhaDU]'_7!؎Jm?VkmvKF"OqW)nF_DZ Z.J4UX +gWufذZ]v98[Qk,.]kq'*]_Q {錧h$b8~kd6ǁU۬oZ|: m$ ]+zZ2Ɇ]Z-m$(`׌%/ "1?ZŔ{v@mn/@T$K&2Z lI&;!9d9u("D& 9囈S+Kopww1"J(sT?o~bӫɫ5^V;o-) ɔH= m"tJ-J'"*.A % B#m>| VbT\rvm϶1mF"=կ=HO2XoqkΦ`crdʙr^yT0DT#5M Ū1/9yk[sg跮ژգ? Y(`W\ǷC]Jm(w߿f_~V4θu#Lv 7=^Q'sDLSr~SRם닿3ǍU,<\˹zd<8i@zE3g\M+3z<ôEpEau~-V+̀X,+>c~nvys+;Ji!ކ_<o9ќV= # m[};v)n<#8EcO7^N.156Mz&4`ɢ8&j*͐#)K)nl"Z)ag=L1\3hŜ=s F~ R݋LJ$f&rYwqRrђ|*D j!Z$ƕ!Pe9c5AkSI9qi![Z jaguo23Z+v|-(uQ0 ^B5"rFrIV5dR5A@ ]_Sm|f"ּy%O΢=%?-W{S4Xmu*Gqf%$פZs7*EhPG:yqe=z"Es˻5wmVx9'JKtoA _f/3ȗu9]⭤\ɈNt8\nrqOJ%V2i4(W*wpN:7|Lbզ/:RY+g~gh>.//׎UJ4 e=j}KlRt.,b,F/ !*_N MH :Jrק7jNx-fyT(j"zg#F2UjˊYMnZ$UT1nrBD`Qa!H-++T \"A^5V\RPY" 0Ds@\|2D2K^ےCdmITuwMҶ$ ےvHRazmຬ- 9$Ε.eY[98 oǃ2* $A]1hGj9(<&fؖ F,pe>v]7%)e=4Z$Fxzӿ&383u$ytxICƋc IТ Yݳ\ D({r~"T`{/[h27`,Vcl{'`u9qc`Gզ=#"xwqI equw|^=,{$$eKM[M$E8#ꪧz.!MXV5Mk!ӥ*WiYa1:jSǢV_Ky| YnmG4գ諧dx2Mg*YeTN:B(c5kij6,zJj5ޕvxGY`̛5e?,=K+?0 9[6\+גƊԁQ.wHcװ)א'!|ya Xqhu de[&̦Y*O,):f1.Zԡ^@ϕ9ֺ@KC2UF|0 i(SAJ3"IFӋOrD{: .IB2DQ+WE{={ r Ag߾O7r*f,3UJF*R>Ջl \I0ȅ6Ho7W_>~z9{*lB"|Jz΋%E^IO3vW5ۖB Stjuķ~f茢7V;#+F-aɡ"7Iwj'|ԧj#MAR&QdfnsEBK'baRӔΥYX`&HM֑͸R`T\ f3Kǂ~H|"U}Jtn ӖZ8誩P-v+(EWkUVQ T ;?t5تnw:SS0%=\)5$y%5jiqcKzU|? RwYL}4ijFɆK“‚6"@IOP.驹g Nz*ZnbtTˢpNDM%MsD )GB9}eZp㼗ZCiqM',Z).*OiMk5ԱI3V)HZ8u}s2Sch^%RJݥW}7eN뤉-h. 
T3Vn# IlL$ a)vD:6bNȸȠi!5n 7ICa9%NL*Rq ZLr4"Ҕ 0G)AebL"xRVpuu련~u&;%n"UtTO>3uz$ˏj>>fw$J} s}ÓI}}wWN8.g[Qi<DA/'P~._d\/~ּ,}Wٸn1?h6IQ#yHd&U %,љ>(@(:Sy&X\FNQ+ 栜Ր%/|3}l xBf|]T2M(<<=||}R2bY xHnyV@-$ƃe~Y9G1vg۳0+%"TzȳhohLEQnoӶS"mo^ysU)kg_ne~מbL~ؾQt{,xx ~boRBifZog jOjU<5Q*1 ;S= *5(&^0JnvFPzZꫪL)_6J9 C)gdo@4Z)5EFiXA{%f(< F 0.hjWU*M(<꾝0j(=-UUjNS Ra(ejFRa(-vZ=]6J C)WJOI}U)_8J10caړZPz(Ҏ20Rs!'^2JB xhş_s¥dl&+?$+ K> n`q`_tIHuLBtc)wao.̢ջOgF:0I'EZİ& {&AYJ9(I57n[eygQҼ &Q6sE(&i&8`gw^1, Fc.RF NB߬)[ d Jֽ9,RRm @OAYufk[*NV"y 7@y*/7#4Rh>91XmD*4$Ɯ(xLu%j7dCL$-ա0/P-6)n*OsA;sYYF.fu.'Ե'L=`^Bm Y?̾MuJ۶Zξrk|ڬM,޼Sl~"SI]H=<-՞Բ3%u]dR DYXm C mI"&RZXD(cGQ`3qbO@;'B[ B=Zkêټ5B/#JELRP#U:}$ӱһ`3er̞Y^O DhB0K\~Xh%|a\$sAU TZX\U"مa{Bp&MJą%ijyl!ᑫCI|J,`o<(s(`Dž|Cwj O QC̖aRX1Xz@GbienC{#gOՃZjpcOL딐muPk֘1Umz_js;rǨfvBg8%h1\}z.!hX֔ jy5;o9O-d?zh,[|mfV,E(󁺾uE_Wo6SZk9nZlVoZWq\_hBrߡxrC`_Uf<ceqQW>/2fp˔nJ,ι;Bm0dz߻ٖ :mQǻqdL5ӷwk-ݺ@m0W{RR[[@;x3n|0Ww!sbJf|GBy\jě}ٛGxjZ(P& TQ*mJ+eq\W=%՞Z Rm l  xzwη/' t&#k͹.M{}ӏg{pU.WQ耱umA%]̢r鷉QJ'VyMJG){ L;8 t5Lj7v Ӽc$M98)W2?̾w$W^$~- GM3cAy^BHQ[#ߥnMC^$9-?a9=]yߢ2sl2ydH F3y{]{gNKdp/af+n,Wyo9u.AK%YU:5>);H3O;3i)*60& v7=cJIݘg LF3f{j<]WBS-\NJkJis8 z jL%'wxjH  L1VĵU@zSH)>zokN'|a: \0YDŽ=\JmSse3=G`n"9 X MN-UUXa0fnsPB–S1fXy)3S{k ayûĝ늼(@; @㲂Y2D)9E<a2:GUW.mWzHq>_C u_ѣI~fHP}ݽZ{d?2@X]_T; @v@ 3bC;HK#Z>؅DϽHW .L_Ź[),ǐ Wq:ԑ sbq,6aVw&cɹuJ9ކ+052XhDy\ .1.U C*ΌFHd#=Tg:~ H$ޢh60R1ϫo06)o{{m+i4Iz:'-xr~"& -BQٻ6[zc\܊i\o{x ˇ[>Qܼ8xg9ۊAq˳-xʟ+ޯ~v%z-ٻq$U1|?׋Y,nv{=YpHIL`e'e;DIVL EVz0/!'l>|ܾWpfGY}\@%v}CdeU$xry-oX=%BuΑדG\3GC  rIm`.:ɷNf H HXH; h0PgYcވTgޣYW44=c )3P#P H,H? 
"*4T9WrJ SE!l*I54nYǵE[T^Rp6Llǒ퓕k ȧ-ԙEE0Fl.LPUS!Kcٌ S+7O\9#ȅNO` F b2V4$3b.*X)Ya { ԘNY QeZB!-r(t(m^wq;(-S$kC?|pJ*M`{0- eOf#HŲ9ţZ b}PnIJy &fhȴ3hֶzw Mw^#OIh4yHj%rNRֿd@ F+@~^/0*rjE5Jr O[OFnQ;?C3V| Ҵ]ۇ+S,AF6ATUU}@4m]/P 4QOPZ[iں{hӣ{=ѴATs?H6ѴO}+1u\#jԽ3 T9 kaL 'ҏ'Z@2OzzK ^ l ϖ* :ϷpI6ƨ#,|3 ݵU`¨28b";d͔n6}&dHA B7ڰ|LS8G' y&dS~n* 2QQ~h?9aJj 12366ms (15:19:01.412) Jan 28 15:19:01 crc kubenswrapper[4656]: Trace[647631526]: [12.366709148s] [12.366709148s] END Jan 28 15:19:01 crc kubenswrapper[4656]: I0128 15:19:01.412442 4656 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 15:19:01 crc kubenswrapper[4656]: I0128 15:19:01.417113 4656 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 28 15:19:01 crc kubenswrapper[4656]: E0128 15:19:01.423369 4656 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 28 15:19:01 crc kubenswrapper[4656]: I0128 15:19:01.424722 4656 trace.go:236] Trace[1866615913]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:18:51.294) (total time: 10130ms): Jan 28 15:19:01 crc kubenswrapper[4656]: Trace[1866615913]: ---"Objects listed" error: 10126ms (15:19:01.420) Jan 28 15:19:01 crc kubenswrapper[4656]: Trace[1866615913]: [10.130015331s] [10.130015331s] END Jan 28 15:19:01 crc kubenswrapper[4656]: I0128 15:19:01.424750 4656 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 15:19:01 crc kubenswrapper[4656]: E0128 15:19:01.451854 4656 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.216074 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-12-14 20:24:51.171924567 +0000 UTC Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.717049 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.718110 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.720819 4656 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3" exitCode=255 Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.721043 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3"} Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.721381 4656 scope.go:117] "RemoveContainer" containerID="c80831597489860182070ea4c6f6734b2feca0011557f863624a9181f66fa7c2" Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.721628 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.722855 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.723276 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.723377 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 
15:19:02.725387 4656 scope.go:117] "RemoveContainer" containerID="4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3" Jan 28 15:19:02 crc kubenswrapper[4656]: E0128 15:19:02.725869 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.965225 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.114449 4656 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.129281 4656 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.217795 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:30:58.486037769 +0000 UTC Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.725501 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.726924 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.728181 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:03 crc 
kubenswrapper[4656]: I0128 15:19:03.728287 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.728300 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.729061 4656 scope.go:117] "RemoveContainer" containerID="4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3" Jan 28 15:19:03 crc kubenswrapper[4656]: E0128 15:19:03.729306 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.732443 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:19:04 crc kubenswrapper[4656]: I0128 15:19:04.222017 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 15:25:50.849908215 +0000 UTC Jan 28 15:19:04 crc kubenswrapper[4656]: I0128 15:19:04.614195 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:19:04 crc kubenswrapper[4656]: I0128 15:19:04.730500 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:19:04 crc kubenswrapper[4656]: I0128 15:19:04.735683 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:04 crc kubenswrapper[4656]: I0128 15:19:04.735771 4656 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:04 crc kubenswrapper[4656]: I0128 15:19:04.736274 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:04 crc kubenswrapper[4656]: I0128 15:19:04.737536 4656 scope.go:117] "RemoveContainer" containerID="4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3" Jan 28 15:19:04 crc kubenswrapper[4656]: E0128 15:19:04.737856 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.328709 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:48:15.814490988 +0000 UTC Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.718739 4656 csr.go:261] certificate signing request csr-c58gn is approved, waiting to be issued Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.733794 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.735143 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.735207 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.735222 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.736267 4656 scope.go:117] "RemoveContainer" containerID="4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3"
Jan 28 15:19:05 crc kubenswrapper[4656]: E0128 15:19:05.736505 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.795503 4656 csr.go:257] certificate signing request csr-c58gn is issued
Jan 28 15:19:06 crc kubenswrapper[4656]: I0128 15:19:06.329688 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:42:59.813209511 +0000 UTC
Jan 28 15:19:06 crc kubenswrapper[4656]: I0128 15:19:06.934747 4656 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-28 15:14:05 +0000 UTC, rotation deadline is 2026-11-15 11:41:44.619582128 +0000 UTC
Jan 28 15:19:06 crc kubenswrapper[4656]: I0128 15:19:06.934951 4656 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6980h22m37.684635685s for next certificate rotation
Jan 28 15:19:07 crc kubenswrapper[4656]: I0128 15:19:07.330033 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:14:37.0372205 +0000 UTC
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.331449 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:26:42.911557512 +0000 UTC
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.343070 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.343979 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.346382 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.346560 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.346638 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.348354 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.423777 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.426282 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.426349 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.426365 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.426595 4656 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.704350 4656 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.704795 4656 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Jan 28 15:19:08 crc kubenswrapper[4656]: E0128 15:19:08.704854 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.719557 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.719993 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.720151 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.720264 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.720345 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:08Z","lastTransitionTime":"2026-01-28T15:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:08 crc kubenswrapper[4656]: E0128 15:19:08.752758 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.758466 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.758718 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.758796 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.758895 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.759001 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:08Z","lastTransitionTime":"2026-01-28T15:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.996909 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:08.998357 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.110443 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.110539 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:09 crc kubenswrapper[4656]: E0128 15:19:09.141745 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.166270 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.166373 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.166388 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.166409 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 
15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.166431 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:09Z","lastTransitionTime":"2026-01-28T15:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.331930 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:09:09.238054842 +0000 UTC Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.754761 4656 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 15:19:09 crc kubenswrapper[4656]: E0128 15:19:09.779742 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.786007 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.786042 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.786051 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.786067 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 
15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.786076 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:09Z","lastTransitionTime":"2026-01-28T15:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:09 crc kubenswrapper[4656]: E0128 15:19:09.799629 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:09 crc kubenswrapper[4656]: E0128 15:19:09.799780 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.802665 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.802718 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.802730 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.802756 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.802773 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:09Z","lastTransitionTime":"2026-01-28T15:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.905905 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.905969 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.905980 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.905998 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.906010 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:09Z","lastTransitionTime":"2026-01-28T15:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.009073 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.009147 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.009193 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.009221 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.009281 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.112692 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.113031 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.113056 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.113087 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.113382 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.217127 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.217198 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.217213 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.217234 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.217248 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.331498 4656 apiserver.go:52] "Watching apiserver" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.332251 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:10:15.464287738 +0000 UTC Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.335422 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.335479 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.335505 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.335538 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.335568 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.340320 4656 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.341437 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rpzjg","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-node-identity/network-node-identity-vrzqb","openshift-ovn-kubernetes/ovnkube-node-kwnzt","openshift-dns/node-resolver-c695w","openshift-machine-config-operator/machine-config-daemon-8llkk","openshift-multus/multus-additional-cni-plugins-854tp","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-image-registry/node-ca-55xm4","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.342417 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.343335 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.343385 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.343512 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.343346 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.344067 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.344197 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.344488 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-c695w" Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.344649 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.344757 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.344793 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.345123 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.345350 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.345131 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.345422 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.346595 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.349142 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.351964 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.352161 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.353270 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.353401 4656 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.353976 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.354011 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.353981 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.354577 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.354865 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.355084 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.355334 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.355613 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.355761 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.355638 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.363281 4656 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.366883 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.367718 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.368140 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.368761 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.368786 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.369066 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.369380 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.369540 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.369670 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.369905 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.370350 4656 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.370644 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.371033 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.371318 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.372231 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.372598 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.377491 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.396220 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.398669 4656 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.403451 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.422642 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.440451 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.440498 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.440508 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.440525 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.440553 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.447300 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.459463 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.459824 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.459974 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460077 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460192 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460312 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460427 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460543 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460649 4656 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460756 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460860 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460982 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461102 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461205 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461310 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461415 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461504 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461597 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461674 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461747 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461818 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461955 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462121 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462229 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462319 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462405 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462482 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462562 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462693 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462807 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462881 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462953 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463065 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463198 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463344 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463425 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463491 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463578 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463650 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463718 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463789 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463879 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463960 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464032 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464105 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464243 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464358 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464443 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464546 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464630 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464704 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464773 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464854 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464935 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465059 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465155 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465261 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473386 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 
28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473427 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473456 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473479 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473504 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473527 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473546 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473569 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473588 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473607 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473625 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473649 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: 
I0128 15:19:10.473668 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473698 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473722 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473742 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473762 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473784 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473802 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473824 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473842 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473861 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473882 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474054 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474077 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474094 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474110 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474127 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474144 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" 
(UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474164 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474649 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474675 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474693 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474711 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474728 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474747 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474764 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474786 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474802 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474824 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474844 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474862 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474880 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474900 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474918 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474935 4656 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474954 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474973 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474990 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475008 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475026 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475045 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475073 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475093 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475111 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475130 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475152 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475174 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475394 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475411 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475429 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475446 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:19:10 crc 
kubenswrapper[4656]: I0128 15:19:10.475463 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475481 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475498 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475515 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475534 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475552 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475576 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718851 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718931 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718961 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718983 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc 
kubenswrapper[4656]: I0128 15:19:10.716110 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461441 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461434 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461696 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461717 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462120 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462770 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462972 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463122 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463324 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463756 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463920 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464036 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464314 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464783 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465107 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465565 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465612 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465839 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465846 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.471243 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473318 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.706741 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.707063 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.708073 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.710073 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718345 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718597 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718629 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718933 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718937 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.726361 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.719191 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.719286 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.719883 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.720713 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.722237 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.722288 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.722449 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.722633 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.722817 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.722932 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.723584 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.723648 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.723891 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.724138 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.724304 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.724392 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.724751 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.725863 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.726009 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.725880 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.726598 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.726632 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.726762 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.726862 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.727033 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.727122 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.727334 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.727639 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728008 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728255 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728375 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728425 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728527 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728577 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728622 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728667 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728762 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728988 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.729223 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.729713 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.729901 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.730153 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.730601 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.730706 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.731122 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.732454 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.731440 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.731588 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.731921 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.732052 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.732310 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.732511 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.732962 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.733276 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.733359 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.733514 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.733578 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.733775 4656 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.733978 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.734166 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.734261 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.735068 4656 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.735218 4656 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.735295 4656 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 
28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.735329 4656 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.735492 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:11.235326713 +0000 UTC m=+41.743497517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.735553 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.735615 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.736425 4656 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.736490 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.736527 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.736548 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.736834 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.737040 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.737300 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.737510 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.737297 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.737710 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.737896 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.738035 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.738116 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.738121 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.738092 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.738074 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.738362 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.738476 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739086 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.736489 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739479 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739514 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739538 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739560 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739583 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" 
(UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739607 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739631 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739660 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739656 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739872 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740028 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740019 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740212 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740418 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741340 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741380 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740643 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741405 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740686 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741432 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741455 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741480 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:19:10 crc 
kubenswrapper[4656]: I0128 15:19:10.741500 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741560 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741588 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741616 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741642 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741672 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741700 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741733 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741773 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741806 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741822 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741843 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741861 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741766 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742716 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742768 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742794 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742826 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742854 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742879 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742921 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742954 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742999 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743022 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743045 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743078 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743199 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743225 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743244 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743263 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743291 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743321 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743339 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743358 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743418 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743445 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743471 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743501 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743530 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743550 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743569 4656 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743589 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743648 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743674 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743699 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743724 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743755 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743780 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743810 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743897 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-hostroot\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743921 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06d899c2-5ac5-4760-b71a-06c970fdc9fc-proxy-tls\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743939 4656 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-config\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743958 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-system-cni-dir\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743988 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e54aabb4-c2b7-4000-927d-c71f81572645-serviceca\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744009 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-cni-bin\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744043 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-etc-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744075 4656 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-netd\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744094 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68qp2\" (UniqueName: \"kubernetes.io/projected/5748c84b-daec-4bf0-bda9-180d379ab075-kube-api-access-68qp2\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744141 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744192 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e54aabb4-c2b7-4000-927d-c71f81572645-host\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744340 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:10 crc kubenswrapper[4656]: 
I0128 15:19:10.744367 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-kubelet\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744384 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-systemd-units\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744414 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-cni-multus\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744437 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-tuning-conf-dir\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744461 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-k8s-cni-cncf-io\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 
28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744487 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-daemon-config\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744508 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5l8q\" (UniqueName: \"kubernetes.io/projected/e54aabb4-c2b7-4000-927d-c71f81572645-kube-api-access-c5l8q\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744538 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8f9a9023-4c07-4c93-b4d6-9034873ace37-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744559 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7662a84d-d9cb-4684-b76f-c63ffeff8344-cni-binary-copy\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745276 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kwnzt\" (UID: 
\"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745336 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-os-release\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745447 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-os-release\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745511 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-conf-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745692 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745877 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-slash\") pod \"ovnkube-node-kwnzt\" (UID: 
\"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745938 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746014 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746072 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/06d899c2-5ac5-4760-b71a-06c970fdc9fc-rootfs\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746110 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-multus-certs\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746151 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746198 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746224 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746473 4656 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746996 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-log-socket\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747044 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-cnibin\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747073 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-netns\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747120 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tb2m\" (UniqueName: \"kubernetes.io/projected/06d899c2-5ac5-4760-b71a-06c970fdc9fc-kube-api-access-2tb2m\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747139 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcmdq\" (UniqueName: 
\"kubernetes.io/projected/8f9a9023-4c07-4c93-b4d6-9034873ace37-kube-api-access-qcmdq\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747184 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747208 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-system-cni-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747236 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747299 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747485 4656 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-netns\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.748943 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-systemd\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749024 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-ovn\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749057 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748c84b-daec-4bf0-bda9-180d379ab075-ovn-node-metrics-cert\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749149 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5994c1d0-57bd-4f0d-a63f-6e0f54746c3e-hosts-file\") pod \"node-resolver-c695w\" (UID: \"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\") " pod="openshift-dns/node-resolver-c695w" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749228 4656 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4pbr\" (UniqueName: \"kubernetes.io/projected/5994c1d0-57bd-4f0d-a63f-6e0f54746c3e-kube-api-access-r4pbr\") pod \"node-resolver-c695w\" (UID: \"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\") " pod="openshift-dns/node-resolver-c695w" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749259 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-kubelet\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749308 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l84dh\" (UniqueName: \"kubernetes.io/projected/7662a84d-d9cb-4684-b76f-c63ffeff8344-kube-api-access-l84dh\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749339 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06d899c2-5ac5-4760-b71a-06c970fdc9fc-mcd-auth-proxy-config\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749387 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 
28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749451 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-node-log\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749485 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-env-overrides\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749516 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-cnibin\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749544 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-socket-dir-parent\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749617 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-var-lib-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749724 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-cni-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749772 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749801 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-ovn-kubernetes\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749824 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-bin\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749846 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-script-lib\") pod \"ovnkube-node-kwnzt\" (UID: 
\"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749908 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749935 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-etc-kubernetes\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749967 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.750003 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8f9a9023-4c07-4c93-b4d6-9034873ace37-cni-binary-copy\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.751261 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") 
pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740696 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740788 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741637 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741698 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.777817 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.779107 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.741871 4656 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.757925 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.781803 4656 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.781895 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node 
\"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.784231 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.787913 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.788316 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.788469 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.788631 4656 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.789343 4656 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.789569 4656 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 
15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.789694 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.789829 4656 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790001 4656 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790139 4656 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790470 4656 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790592 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790723 4656 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790830 4656 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790948 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790962 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791122 4656 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791141 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791164 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791222 4656 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791237 4656 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791252 4656 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791281 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791293 4656 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791306 4656 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791321 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791343 4656 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791362 4656 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791373 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791383 4656 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791394 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791405 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791415 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791425 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791441 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791459 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791473 4656 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791486 4656 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791500 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791513 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791526 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791536 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791545 4656 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791556 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791567 4656 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791577 4656 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791587 4656 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791598 4656 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791610 4656 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" 
DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791619 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791629 4656 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791641 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791652 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.773945 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791827 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791844 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791855 4656 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.786453 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.741900 4656 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.783197 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.741921 4656 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.741944 4656 reflector.go:484] 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.741963 4656 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791972 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792017 4656 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792038 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792082 4656 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792097 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 
15:19:10.792109 4656 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792124 4656 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792163 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792193 4656 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792206 4656 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792218 4656 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792231 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792243 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792283 4656 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792300 4656 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792323 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792341 4656 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792353 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792363 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792373 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath 
\"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792382 4656 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792392 4656 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792402 4656 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792413 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792423 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792434 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792443 4656 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792453 4656 
reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792466 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792476 4656 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792486 4656 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792495 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792507 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792516 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792529 4656 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792553 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792562 4656 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792572 4656 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792582 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792610 4656 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792619 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792641 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792654 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792665 4656 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792681 4656 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792705 4656 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792717 4656 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792729 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792741 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792757 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792770 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792785 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792796 4656 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792807 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792821 4656 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.761885 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.741988 4656 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742010 4656 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742284 4656 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742312 4656 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742332 4656 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742353 4656 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742377 4656 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742403 4656 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742427 4656 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742460 4656 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742488 4656 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742472 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742514 4656 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742539 4656 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742923 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743254 4656 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743282 4656 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743302 4656 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743334 4656 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743356 4656 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.794376 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.795933 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743468 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743675 4656 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743698 4656 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743718 4656 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743739 4656 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743805 4656 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743935 4656 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743968 4656 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743998 4656 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.744380 4656 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: very short watch: pkg/kubelet/config/apiserver.go:66: Unexpected watch close - watch lasted less than a second and no items received
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744640 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745212 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745345 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.745481 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.796446 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:11.296411632 +0000 UTC m=+41.804582616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745741 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747361 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747437 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747769 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.748021 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.748202 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.748331 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.748892 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.748950 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749030 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749045 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749375 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749495 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749731 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.750308 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.750676 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.751042 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.751252 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.751387 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.751947 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.752196 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.752156 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.752424 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.752533 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.752965 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.753077 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.753127 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.799027 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:11.299005127 +0000 UTC m=+41.807175931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.753613 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.755814 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.755598 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.756063 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.756226 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.758239 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.758526 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.758747 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.759503 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.759747 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.759931 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.760443 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.760472 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.761033 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.761144 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.761246 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.761740 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.768680 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.768843 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.772664 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.773211 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.777525 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.741827 4656 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.780197 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.780382 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.780412 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.781050 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.782096 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.782348 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.783011 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.785299 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.785685 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.786431 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.799420 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.799437 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.799508 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:11.299499291 +0000 UTC m=+41.807670095 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.786590 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.799531 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.799538 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.799563 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:11.299556933 +0000 UTC m=+41.807727737 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.786592 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.786921 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.786984 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.787457 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.787690 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.787835 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.802819 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.805809 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.806376 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.809155 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.814905 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.845175 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.845231 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.845247 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.845268 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.845281 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.893890 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-netns\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.893963 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-systemd\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.893991 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-ovn\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894017 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748c84b-daec-4bf0-bda9-180d379ab075-ovn-node-metrics-cert\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894057 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5994c1d0-57bd-4f0d-a63f-6e0f54746c3e-hosts-file\") pod \"node-resolver-c695w\" (UID: \"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\") " pod="openshift-dns/node-resolver-c695w" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894084 4656 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4pbr\" (UniqueName: \"kubernetes.io/projected/5994c1d0-57bd-4f0d-a63f-6e0f54746c3e-kube-api-access-r4pbr\") pod \"node-resolver-c695w\" (UID: \"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\") " pod="openshift-dns/node-resolver-c695w" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894109 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-kubelet\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894131 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l84dh\" (UniqueName: \"kubernetes.io/projected/7662a84d-d9cb-4684-b76f-c63ffeff8344-kube-api-access-l84dh\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894153 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06d899c2-5ac5-4760-b71a-06c970fdc9fc-mcd-auth-proxy-config\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894215 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-node-log\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894238 4656 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-env-overrides\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894264 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-cnibin\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894290 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-socket-dir-parent\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894312 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-var-lib-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894331 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-cni-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894352 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894373 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-ovn-kubernetes\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894396 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-bin\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894416 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-script-lib\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894438 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894461 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-etc-kubernetes\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894484 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8f9a9023-4c07-4c93-b4d6-9034873ace37-cni-binary-copy\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894506 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-hostroot\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894527 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06d899c2-5ac5-4760-b71a-06c970fdc9fc-proxy-tls\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894548 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-config\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894572 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-system-cni-dir\") pod 
\"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894593 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e54aabb4-c2b7-4000-927d-c71f81572645-serviceca\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894614 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-cni-bin\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894634 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-etc-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894658 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-netd\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894680 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68qp2\" (UniqueName: \"kubernetes.io/projected/5748c84b-daec-4bf0-bda9-180d379ab075-kube-api-access-68qp2\") pod \"ovnkube-node-kwnzt\" (UID: 
\"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894701 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e54aabb4-c2b7-4000-927d-c71f81572645-host\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894733 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-kubelet\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894755 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-systemd-units\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894780 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-cni-multus\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894801 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-tuning-conf-dir\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " 
pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894823 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-k8s-cni-cncf-io\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894844 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-daemon-config\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894864 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5l8q\" (UniqueName: \"kubernetes.io/projected/e54aabb4-c2b7-4000-927d-c71f81572645-kube-api-access-c5l8q\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894887 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8f9a9023-4c07-4c93-b4d6-9034873ace37-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894911 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7662a84d-d9cb-4684-b76f-c63ffeff8344-cni-binary-copy\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 
28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894933 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894953 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-os-release\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894974 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-os-release\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.895001 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-conf-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.895118 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-slash\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.895144 4656 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.895195 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/06d899c2-5ac5-4760-b71a-06c970fdc9fc-rootfs\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.895216 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-multus-certs\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.895265 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-log-socket\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.896945 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-cnibin\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.897146 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-netns\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.897195 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tb2m\" (UniqueName: \"kubernetes.io/projected/06d899c2-5ac5-4760-b71a-06c970fdc9fc-kube-api-access-2tb2m\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.897208 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-socket-dir-parent\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.897247 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-slash\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.897228 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcmdq\" (UniqueName: \"kubernetes.io/projected/8f9a9023-4c07-4c93-b4d6-9034873ace37-kube-api-access-qcmdq\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.897385 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-system-cni-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.897759 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.898331 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7662a84d-d9cb-4684-b76f-c63ffeff8344-cni-binary-copy\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899450 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e54aabb4-c2b7-4000-927d-c71f81572645-host\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899492 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-ovn-kubernetes\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899494 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/06d899c2-5ac5-4760-b71a-06c970fdc9fc-rootfs\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " 
pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899525 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-multus-certs\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899550 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-bin\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899601 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899679 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-os-release\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899924 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-cni-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc 
kubenswrapper[4656]: I0128 15:19:10.899964 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-kubelet\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899996 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-etc-kubernetes\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.900263 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-log-socket\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.900316 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-cnibin\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.900343 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-netns\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.900678 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-netns\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.900721 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-kubelet\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.900857 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-system-cni-dir\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.900913 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-systemd\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.901219 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-ovn\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.901947 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-env-overrides\") pod \"ovnkube-node-kwnzt\" (UID: 
\"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.902065 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06d899c2-5ac5-4760-b71a-06c970fdc9fc-mcd-auth-proxy-config\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.902155 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-script-lib\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.902329 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-os-release\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.902098 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-conf-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903329 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-cnibin\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 
15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903600 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903654 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-system-cni-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903664 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-netd\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903700 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903681 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-var-lib-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903750 4656 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5994c1d0-57bd-4f0d-a63f-6e0f54746c3e-hosts-file\") pod \"node-resolver-c695w\" (UID: \"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\") " pod="openshift-dns/node-resolver-c695w" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903792 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-node-log\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903813 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-etc-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.904262 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-tuning-conf-dir\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.904351 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-systemd-units\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.904433 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-hostroot\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.904644 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-k8s-cni-cncf-io\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.904707 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-cni-bin\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.904745 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-cni-multus\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.905139 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-daemon-config\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.905904 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8f9a9023-4c07-4c93-b4d6-9034873ace37-cni-binary-copy\") pod \"multus-additional-cni-plugins-854tp\" (UID: 
\"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906369 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906450 4656 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906499 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906514 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906535 4656 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906550 4656 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906563 4656 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906578 
4656 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906602 4656 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906617 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906631 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906647 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906667 4656 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906681 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906697 4656 reconciler_common.go:293] "Volume detached for 
volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906713 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906757 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906771 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906784 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906801 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906815 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906827 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906839 4656 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906856 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906869 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906882 4656 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906895 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906911 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906923 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906940 4656 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906958 4656 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906996 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907009 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907022 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907039 4656 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907052 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907065 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907077 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907095 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907108 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907123 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907135 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.909924 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06d899c2-5ac5-4760-b71a-06c970fdc9fc-proxy-tls\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk"
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910219 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910278 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910311 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910326 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910352 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910368 4656 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910388 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910404 4656 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910424 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910438 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910465 4656 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910484 4656 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910499 4656 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910513 4656 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910527 4656 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910546 4656 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910561 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910575 4656 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910693 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910712 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910730 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910760 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910777 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910798 4656 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910813 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910826 4656 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910847 4656 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910862 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910876 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910891 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910909 4656 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910923 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910940 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910954 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910977 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910994 4656 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.911008 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.911026 4656 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.911146 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-config\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt"
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.912747 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8f9a9023-4c07-4c93-b4d6-9034873ace37-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp"
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.920106 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e54aabb4-c2b7-4000-927d-c71f81572645-serviceca\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4"
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.924153 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4pbr\" (UniqueName: \"kubernetes.io/projected/5994c1d0-57bd-4f0d-a63f-6e0f54746c3e-kube-api-access-r4pbr\") pod \"node-resolver-c695w\" (UID: \"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\") " pod="openshift-dns/node-resolver-c695w"
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.924547 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748c84b-daec-4bf0-bda9-180d379ab075-ovn-node-metrics-cert\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt"
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.926506 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tb2m\" (UniqueName: \"kubernetes.io/projected/06d899c2-5ac5-4760-b71a-06c970fdc9fc-kube-api-access-2tb2m\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk"
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.926933 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l84dh\" (UniqueName: \"kubernetes.io/projected/7662a84d-d9cb-4684-b76f-c63ffeff8344-kube-api-access-l84dh\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg"
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.929015 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5l8q\" (UniqueName: \"kubernetes.io/projected/e54aabb4-c2b7-4000-927d-c71f81572645-kube-api-access-c5l8q\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4"
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.929807 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcmdq\" (UniqueName: \"kubernetes.io/projected/8f9a9023-4c07-4c93-b4d6-9034873ace37-kube-api-access-qcmdq\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp"
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.929817 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68qp2\" (UniqueName: \"kubernetes.io/projected/5748c84b-daec-4bf0-bda9-180d379ab075-kube-api-access-68qp2\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt"
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.949988 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.950048 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.950061 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.950083 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.950098 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.982834 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-854tp"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.005770 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.015735 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.262628 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8llkk"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.263260 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.263600 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:12.263572579 +0000 UTC m=+42.771743383 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.263971 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-55xm4"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.265511 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rpzjg"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.266026 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-c695w"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.266231 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.266743 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.268912 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.268938 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.268947 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.268967 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.268981 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:11Z","lastTransitionTime":"2026-01-28T15:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.271133 4656 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.273635 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.274823 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.275605 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.276923 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.277636 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.279573 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.280400 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.284740 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.285640 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.288004 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.288714 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.293268 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.294235 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.295342 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.296727 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.297481 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.302779 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.303299 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.304073 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.305452 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.306004 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.356272 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:16:16.238890098 +0000 UTC
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.365296 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.365352 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.365388 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.365415 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365560 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365650 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:12.365619848 +0000 UTC m=+42.873790652 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365649 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365675 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365688 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365718 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365786 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365806 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365868 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365738 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:12.365729161 +0000 UTC m=+42.873899975 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365931 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:12.365903156 +0000 UTC m=+42.874074040 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.366041 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:12.365946028 +0000 UTC m=+42.874116932 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.373144 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.373202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.373211 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.373227 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.373237 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:11Z","lastTransitionTime":"2026-01-28T15:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.383091 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.383779 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.386947 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.387490 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.389106 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.390151 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.391305 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.392191 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.393504 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.394125 4656 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.394323 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.396952 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.397853 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.398346 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.400910
4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.401692 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.402884 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.403679 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.404870 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.405571 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.407073 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.408010 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.409496 
4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.409960 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.411340 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.411870 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.413609 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.414263 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.415575 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.416118 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.416738 
4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.418015 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.418695 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.519839 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.521225 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.521265 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.521297 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.521314 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:11Z","lastTransitionTime":"2026-01-28T15:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.605259 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.615841 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.822654 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.822709 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.822728 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.822751 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.822774 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:11Z","lastTransitionTime":"2026-01-28T15:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:11 crc kubenswrapper[4656]: W0128 15:19:11.823029 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode54aabb4_c2b7_4000_927d_c71f81572645.slice/crio-cc9d8cfbf446cbaf3e80c3cceb68a76c7505c7014eeff1e4d4ea235e5f070344 WatchSource:0}: Error finding container cc9d8cfbf446cbaf3e80c3cceb68a76c7505c7014eeff1e4d4ea235e5f070344: Status 404 returned error can't find the container with id cc9d8cfbf446cbaf3e80c3cceb68a76c7505c7014eeff1e4d4ea235e5f070344 Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.829398 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.829686 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.829866 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.830086 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.830276 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.830715 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.832023 4656 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.836771 4656 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.837034 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.849121 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.868236 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.898808 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.912948 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.926815 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.929123 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.930804 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.930826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.930836 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.930855 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.930866 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:11Z","lastTransitionTime":"2026-01-28T15:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.954868 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.955464 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.957524 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.976783 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.982995 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.983313 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.998620 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.998820 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.003684 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.013301 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e9759344c4c166b923c1743cb258cc7c03bb1ea2642fb069b4d977cb647473df"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.019733 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-c695w" event={"ID":"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e","Type":"ContainerStarted","Data":"76a08d4387fa80a30cf597984eda9385818fdac0faecc9abbae2bb50e2f3bc11"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.021561 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"99f3f58926f9f5145244dbe6e9acfd081f57a6d5e67d0fa71fb1124101e0bee2"} Jan 28 15:19:12 
crc kubenswrapper[4656]: I0128 15:19:12.026397 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.035714 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.041483 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.041732 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.041991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.042105 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.042123 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:12Z","lastTransitionTime":"2026-01-28T15:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.043458 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.047564 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"734b0af208a3f29c981557a5245daa4fc21a51ab4cf5fa8e015542be87200193"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.048020 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.054625 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"caa6333e4a167c18cc52a674e48e299d4c281867ef0c741bf7e1ba8c6827b13b"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.054820 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.057999 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.075037 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.075545 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.075939 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e36be3e25a4ba99df8f847b1f2ae59ff2459bed7baaa6170a8facef2f0bccccf"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.084237 4656 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-image-registry/node-ca-55xm4" event={"ID":"e54aabb4-c2b7-4000-927d-c71f81572645","Type":"ContainerStarted","Data":"cc9d8cfbf446cbaf3e80c3cceb68a76c7505c7014eeff1e4d4ea235e5f070344"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.088639 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerStarted","Data":"b1f1c83f234543b1908577b7478154042fb10bfd05989386ce47719a52921c15"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.091050 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerStarted","Data":"4cef9cbb1b627ac09954266d05140f1667515bb328c389ef233a1a83526f93ca"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.091493 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.107368 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.110357 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.125608 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with 
incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\
"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mount
Path\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.142529 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.145860 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.148918 4656 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.148948 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.148957 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.148975 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.148987 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:12Z","lastTransitionTime":"2026-01-28T15:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.161999 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.170243 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.170772 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.170999 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.171225 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.171470 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.171554 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.173201 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.181794 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.205188 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.221553 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.224376 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.236034 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.244318 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.258712 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.260587 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.271370 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.274033 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.287999 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.301688 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.315093 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.316725 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.318983 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.330725 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.330993 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:14.330952355 +0000 UTC m=+44.839123159 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.331064 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.348320 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.357363 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:38:43.760629462 +0000 UTC Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.371048 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.390939 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.395659 
4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.395701 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.395713 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.395731 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.395740 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:12Z","lastTransitionTime":"2026-01-28T15:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.464101 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.464158 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.464214 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.464251 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.464430 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.464459 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.464473 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.464547 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:14.464530332 +0000 UTC m=+44.972701136 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.464939 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.464970 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-28 15:19:14.464960615 +0000 UTC m=+44.973131419 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.465043 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.465060 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.465070 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.465099 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:14.465090068 +0000 UTC m=+44.973260872 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.465191 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.465229 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:14.465220002 +0000 UTC m=+44.973390806 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.528525 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.528590 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.528604 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.528629 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.528646 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:12Z","lastTransitionTime":"2026-01-28T15:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.668039 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.668065 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.668073 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.668089 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.668100 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:12Z","lastTransitionTime":"2026-01-28T15:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.814976 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.815588 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.815603 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.815623 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.815633 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:12Z","lastTransitionTime":"2026-01-28T15:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.947619 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.947685 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.947696 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.947722 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.947735 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:12Z","lastTransitionTime":"2026-01-28T15:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.052132 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.052205 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.052217 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.052238 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.052250 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.097821 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.102010 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.102038 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.107417 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-55xm4" event={"ID":"e54aabb4-c2b7-4000-927d-c71f81572645","Type":"ContainerStarted","Data":"1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.109633 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-c695w" event={"ID":"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e","Type":"ContainerStarted","Data":"e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.111047 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerStarted","Data":"14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69"} 
Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.116303 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970" exitCode=0 Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.116534 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.118649 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerStarted","Data":"469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.121163 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.122046 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.200832 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.202711 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.202758 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.202773 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.202796 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.202811 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.304896 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.305254 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.305269 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.305289 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.305304 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.318239 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.333615 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.344665 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.358819 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:43:56.343253021 +0000 UTC Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.361206 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.372616 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.390870 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.408863 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.408935 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.408963 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.408985 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.408999 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.414828 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.431207 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.451807 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.471982 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.530833 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.531955 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.531977 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.531986 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.532006 4656 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.532016 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.552411 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.573124 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.636724 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc 
kubenswrapper[4656]: I0128 15:19:13.636765 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.636776 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.636792 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.636803 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.637590 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.647156 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.665814 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.678667 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef3
18bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial 
tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.739736 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.739783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.739795 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.739819 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.739832 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.740926 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.888032 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.888070 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.888082 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.888114 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.888126 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.889147 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.908370 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.953239 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.965659 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.991256 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.991470 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.991503 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.991515 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.991537 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.991553 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.094340 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.094368 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.094377 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.094396 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.094409 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.137076 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.137177 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.170898 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.170937 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.171018 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.171235 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.171552 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.171836 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.196970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.197426 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.197558 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.197654 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.197741 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.299999 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.300059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.300094 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.300113 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.300126 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.405139 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:25:53.766067188 +0000 UTC Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.405309 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.405596 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:18.405567193 +0000 UTC m=+48.913737997 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.407145 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.407309 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.407405 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.407481 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.407567 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.519605 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.519687 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.519725 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.519751 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.519936 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.519964 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.519961 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520000 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520016 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.519981 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520019 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520105 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 
nodeName:}" failed. No retries permitted until 2026-01-28 15:19:18.520072191 +0000 UTC m=+49.028243165 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520133 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:18.520122242 +0000 UTC m=+49.028293256 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520154 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:18.520145073 +0000 UTC m=+49.028316087 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520260 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520315 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:18.520289587 +0000 UTC m=+49.028460391 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.523528 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.523570 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.523582 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.523604 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.523621 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.636112 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.637126 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.637259 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.637453 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.637608 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.795182 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.795219 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.795229 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.795258 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.795268 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.897099 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.897151 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.897187 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.897208 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.897219 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.004707 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.004768 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.004782 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.004812 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.004825 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.108793 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.108861 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.108873 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.108918 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.108933 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.142916 4656 generic.go:334] "Generic (PLEG): container finished" podID="8f9a9023-4c07-4c93-b4d6-9034873ace37" containerID="14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69" exitCode=0 Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.142978 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerDied","Data":"14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.147071 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.147265 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.147350 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.155653 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.166818 4656 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.211193 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.226345 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.277266 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.277299 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.277308 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.277327 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.277336 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.328590 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.347880 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942
f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.374696 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.387512 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.387551 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.387563 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.387581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.387594 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.392061 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.406312 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 22:07:35.114964891 +0000 UTC Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.408565 4656 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.427234 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.444873 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.458189 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.487729 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.510344 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.510376 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.510388 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.510405 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.510415 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.510501 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.525022 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.544268 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.581217 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.598898 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 
15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.618915 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.631423 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.673848 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.673897 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.673909 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.673929 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.673942 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.679022 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.780568 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.782899 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.782951 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.782971 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.782998 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.783024 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.805227 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.821859 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.837033 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.885359 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.885403 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.885412 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.885429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.885439 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.025372 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.025422 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.025433 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.025452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.025464 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.128768 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.128815 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.128824 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.128843 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.128853 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.170997 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.171018 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.171051 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:16 crc kubenswrapper[4656]: E0128 15:19:16.171262 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:16 crc kubenswrapper[4656]: E0128 15:19:16.171401 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:16 crc kubenswrapper[4656]: E0128 15:19:16.171745 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.200365 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.202974 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerStarted","Data":"ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.219500 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.230999 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.231245 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.231321 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc 
kubenswrapper[4656]: I0128 15:19:16.231385 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.231442 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.236437 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 
15:19:16.304387 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.316320 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.331914 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.333961 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc 
kubenswrapper[4656]: I0128 15:19:16.334132 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.334250 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.334360 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.334458 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.366200 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.380832 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.407075 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 07:09:27.632144362 +0000 UTC Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.438737 4656 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.438822 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.438855 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.438902 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.438942 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.542069 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.542116 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.542131 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.542154 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.542194 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.560468 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.606492 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf
35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.724514 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.783698 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.783738 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.783748 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.783765 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.783776 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.797832 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.815990 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.886649 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.887017 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.887100 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.887194 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.887270 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.004141 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.004191 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.004204 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.004221 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.004233 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.107252 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.107285 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.107295 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.107310 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.107320 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.344523 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.344561 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.344569 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.344583 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.344595 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.407926 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 06:08:17.011675535 +0000 UTC Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.454566 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.454606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.454615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.454632 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.454641 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.610826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.611183 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.611196 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.611213 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.611224 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.714211 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.714249 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.714260 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.714278 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.714291 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.817413 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.817523 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.817540 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.817560 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.817571 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.921105 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.921188 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.921205 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.921237 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.921256 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.024954 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.025002 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.025011 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.025027 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.025036 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:18Z","lastTransitionTime":"2026-01-28T15:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.127970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.128020 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.128040 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.128064 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.128081 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:18Z","lastTransitionTime":"2026-01-28T15:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.348818 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.349026 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.349552 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.349648 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.349717 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.349779 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.352413 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.352489 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.352506 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.352537 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.352549 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:18Z","lastTransitionTime":"2026-01-28T15:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.359718 4656 generic.go:334] "Generic (PLEG): container finished" podID="8f9a9023-4c07-4c93-b4d6-9034873ace37" containerID="ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6" exitCode=0 Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.359768 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerDied","Data":"ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6"} Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.377453 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.438540 4656 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:00:26.122520887 +0000 UTC Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.612194 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.612391 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.612456 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.612523 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.612563 4656 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.612690 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:26.612661727 +0000 UTC m=+57.120832561 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.612836 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.612894 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:26.612880673 +0000 UTC m=+57.121051507 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613389 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613502 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:26.61346868 +0000 UTC m=+57.121639674 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613522 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613547 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613566 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613567 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613583 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613596 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613614 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:26.613598224 +0000 UTC m=+57.121769058 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613644 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:26.613629214 +0000 UTC m=+57.121800058 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.617413 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.617461 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.617483 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.617509 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.617526 4656 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:18Z","lastTransitionTime":"2026-01-28T15:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.630774 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.708360 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.797910 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.797936 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.797944 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.797958 4656 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.797967 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:18Z","lastTransitionTime":"2026-01-28T15:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.018783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.018818 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.018830 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.018847 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.018861 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:19Z","lastTransitionTime":"2026-01-28T15:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.053862 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.121480 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.121505 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.121516 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.121530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.121540 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:19Z","lastTransitionTime":"2026-01-28T15:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.252555 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.252587 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.252596 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.252608 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.252617 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:19Z","lastTransitionTime":"2026-01-28T15:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.256379 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.279738 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.302955 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.321493 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.336648 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.431517 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.435814 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.435847 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.435859 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.435875 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.435887 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:19Z","lastTransitionTime":"2026-01-28T15:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.437833 4656 generic.go:334] "Generic (PLEG): container finished" podID="8f9a9023-4c07-4c93-b4d6-9034873ace37" containerID="8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d" exitCode=0 Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.437894 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerDied","Data":"8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.444014 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.469118 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 10:42:52.214739941 +0000 UTC Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.484941 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.668338 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.671310 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.671356 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.671368 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.671388 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.671399 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:19Z","lastTransitionTime":"2026-01-28T15:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.704555 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.742434 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.761069 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.781008 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c52
43f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.819753 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.819813 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.819824 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.819849 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.820296 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:19Z","lastTransitionTime":"2026-01-28T15:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.823802 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.889531 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.911807 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.931909 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.931947 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.931958 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.931975 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.931990 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:19Z","lastTransitionTime":"2026-01-28T15:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.932718 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.953649 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.972636 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.988010 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.005371 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.235488 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.235541 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.235621 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.235688 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.235796 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.235908 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.238933 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.238985 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.239006 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.239026 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.239039 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.240633 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.240690 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.240734 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.240756 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.240768 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.260102 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.261350 4656 scope.go:117] "RemoveContainer" containerID="4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.272452 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.272734 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.272768 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.272779 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.272791 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.272803 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.302478 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.312740 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.312834 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.312865 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.312915 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.312933 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.353773 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.363206 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.363276 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.363296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.363329 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.363354 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.391525 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.396836 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.396869 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.396879 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.396896 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.396907 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.417121 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.417405 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.419792 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.419821 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.419829 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.419844 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.419852 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.452937 4656 generic.go:334] "Generic (PLEG): container finished" podID="8f9a9023-4c07-4c93-b4d6-9034873ace37" containerID="6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f" exitCode=0 Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.453382 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerDied","Data":"6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f"} Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.469822 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:54:17.063899497 +0000 UTC Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.480359 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.503680 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.522294 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.523280 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.523982 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.524020 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.524043 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.524072 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.542261 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.569687 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.585893 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 
15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.602525 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.615561 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.626723 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.626750 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.626758 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.626773 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.626782 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.632698 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.653898 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.695360 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759
8cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.723360 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617
b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedA
t\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z"
Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.730389 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.730430 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.730442 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.730461 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.730473 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.740209 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z"
Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.833506 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.833546 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.833563 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.833590 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.833608 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.936464 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.936501 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.936510 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.936526 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.936536 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.038606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.038920 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.038990 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.039054 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.039114 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:21Z","lastTransitionTime":"2026-01-28T15:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.141319 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.141355 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.141371 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.141391 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.141404 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:21Z","lastTransitionTime":"2026-01-28T15:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.188624 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.214501 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.237493 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.254668 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.271301 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c52
43f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.297348 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.297389 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.297415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.297434 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.297446 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:21Z","lastTransitionTime":"2026-01-28T15:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.304658 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.320285 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.365473 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.384963 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.400433 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.400475 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.400486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.400509 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.400521 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:21Z","lastTransitionTime":"2026-01-28T15:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.402446 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.418433 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e
503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.684398 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.685426 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 12:14:53.40887575 +0000 UTC Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.700505 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.720256 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.720532 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.720602 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.720667 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.720728 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:21Z","lastTransitionTime":"2026-01-28T15:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.741510 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.744328 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.744690 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.758138 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerStarted","Data":"23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.767363 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.776640 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.778999 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.779133 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.779194 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.781484 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.799667 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.001953 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.001982 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.001991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.002035 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.002052 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.016628 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.057781 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.062024 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.076889 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.105379 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.105415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc 
kubenswrapper[4656]: I0128 15:19:22.105426 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.105444 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.105457 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.156962 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.165920 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.170496 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:22 crc kubenswrapper[4656]: E0128 15:19:22.170657 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.171119 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:22 crc kubenswrapper[4656]: E0128 15:19:22.171236 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.171306 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:22 crc kubenswrapper[4656]: E0128 15:19:22.171353 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.190890 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e
503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.208878 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.208962 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.209005 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc 
kubenswrapper[4656]: I0128 15:19:22.209027 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.209039 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.211966 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.228722 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.245643 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.311597 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.311625 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.311633 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.311647 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.311657 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.322827 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.337008 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.354234 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.374010 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.523885 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.523929 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.523941 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.523959 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.523971 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.528872 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.545056 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.660774 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.661972 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc 
kubenswrapper[4656]: I0128 15:19:22.661991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.661999 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.662012 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.662021 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.685029 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.685944 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:26:48.17449602 +0000 UTC Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.699685 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.714406 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.726682 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.752227 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.764362 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.764388 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.764397 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.764411 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.764422 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.785526 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.818793 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.841887 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.891768 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.891807 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.891818 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.891846 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.891858 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.994461 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.994499 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.994515 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.994535 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.994549 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.098032 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.098062 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.098071 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.098085 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.098095 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:23Z","lastTransitionTime":"2026-01-28T15:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.201154 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.201194 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.201212 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.201245 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.201257 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:23Z","lastTransitionTime":"2026-01-28T15:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.371484 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.371526 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.371539 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.371558 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.371572 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:23Z","lastTransitionTime":"2026-01-28T15:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.686532 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:09:40.109157128 +0000 UTC Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.734452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.734496 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.734505 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.734521 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.734533 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:23Z","lastTransitionTime":"2026-01-28T15:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.838521 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.838567 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.838582 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.838604 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.838618 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:23Z","lastTransitionTime":"2026-01-28T15:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.964265 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.964502 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.964762 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.964966 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.965049 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:23Z","lastTransitionTime":"2026-01-28T15:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.068213 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.068253 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.068263 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.068282 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.068294 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.170736 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:24 crc kubenswrapper[4656]: E0128 15:19:24.171069 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.171439 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.171544 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:24 crc kubenswrapper[4656]: E0128 15:19:24.171723 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:24 crc kubenswrapper[4656]: E0128 15:19:24.171881 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.173365 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.173417 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.173429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.173445 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.173488 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.293324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.293363 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.293371 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.293387 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.293400 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.395378 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.395416 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.395429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.395447 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.395480 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.497592 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.497622 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.497630 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.497644 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.497653 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.600437 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.600494 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.600509 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.600532 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.600545 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.687032 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:06:45.889028746 +0000 UTC Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.703883 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.703926 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.703957 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.703977 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.703989 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.793488 4656 generic.go:334] "Generic (PLEG): container finished" podID="8f9a9023-4c07-4c93-b4d6-9034873ace37" containerID="23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516" exitCode=0 Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.793545 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerDied","Data":"23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.806252 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.806534 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.806612 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.806690 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.806759 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.819249 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.838828 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.851662 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.869454 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c52
43f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.918451 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.918504 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.918515 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.918545 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.918555 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.925800 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.946551 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.963485 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.975791 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:24.991615 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.006911 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.026134 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.045655 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.069564 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.169684 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.169721 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.169732 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.169771 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.169783 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.272231 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.272290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.272305 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.272324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.272356 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.376204 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.376265 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.376281 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.376305 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.376321 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.479374 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.479407 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.479418 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.479435 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.479446 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.581909 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.581947 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.581958 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.581974 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.581984 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.684869 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.684911 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.684921 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.684938 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.684949 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.687635 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 21:09:43.419383191 +0000 UTC Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.788409 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.788456 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.788467 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.788488 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.788499 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.805271 4656 generic.go:334] "Generic (PLEG): container finished" podID="8f9a9023-4c07-4c93-b4d6-9034873ace37" containerID="34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6" exitCode=0 Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.805357 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerDied","Data":"34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.850472 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.867987 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q"] Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.868918 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.871575 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.873742 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7112154f-4499-48ec-9135-6f4a26eca33a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.873802 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7112154f-4499-48ec-9135-6f4a26eca33a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.873856 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7112154f-4499-48ec-9135-6f4a26eca33a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.873896 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mq85\" (UniqueName: \"kubernetes.io/projected/7112154f-4499-48ec-9135-6f4a26eca33a-kube-api-access-2mq85\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: 
\"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.874933 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.877609 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.902796 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.903312 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.903449 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.903550 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.903741 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.904690 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.918638 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.938987 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.955914 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.970560 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c52
43f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.974986 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7112154f-4499-48ec-9135-6f4a26eca33a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.975151 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7112154f-4499-48ec-9135-6f4a26eca33a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.975275 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mq85\" (UniqueName: \"kubernetes.io/projected/7112154f-4499-48ec-9135-6f4a26eca33a-kube-api-access-2mq85\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: 
\"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.975392 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7112154f-4499-48ec-9135-6f4a26eca33a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.976068 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7112154f-4499-48ec-9135-6f4a26eca33a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.976618 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7112154f-4499-48ec-9135-6f4a26eca33a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.985191 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7112154f-4499-48ec-9135-6f4a26eca33a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.004700 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.006356 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.006407 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.006421 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.006441 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.006453 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.009344 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mq85\" (UniqueName: \"kubernetes.io/projected/7112154f-4499-48ec-9135-6f4a26eca33a-kube-api-access-2mq85\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.022060 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.038465 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.054408 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.069019 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.084185 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.101304 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.118563 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.118608 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.118617 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.118642 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.118652 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.127940 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.139270 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.152599 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.167453 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.169698 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.169876 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.169990 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.170057 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.170095 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.170135 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.184846 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.186988 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.210299 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: W0128 15:19:26.210282 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7112154f_4499_48ec_9135_6f4a26eca33a.slice/crio-61dafdc870de9a07c8341c1b0a9e5097abc71526c434f6f7e08d951f55f35521 WatchSource:0}: Error finding container 61dafdc870de9a07c8341c1b0a9e5097abc71526c434f6f7e08d951f55f35521: Status 404 returned error can't find the container with id 61dafdc870de9a07c8341c1b0a9e5097abc71526c434f6f7e08d951f55f35521 Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.222314 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.222374 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.222383 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.222400 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.222624 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.225202 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.242615 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.259935 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.273778 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.290296 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c52
43f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.315848 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.326068 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.326114 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.326126 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.326140 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.326151 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.328644 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.428723 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.428772 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.428783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.428799 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.428808 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.530869 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.530916 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.530929 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.530946 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.530957 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.635267 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.635331 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.635354 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.635391 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.635420 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.833284 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:32:15.049657887 +0000 UTC Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.833697 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.833816 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.833847 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.833876 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:26 
crc kubenswrapper[4656]: I0128 15:19:26.833899 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834066 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:42.834023116 +0000 UTC m=+73.342193920 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834105 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834122 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834135 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834152 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834190 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834204 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834207 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:42.834194171 +0000 UTC m=+73.342364975 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834235 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834243 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:42.834230942 +0000 UTC m=+73.342401956 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834284 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:42.834276234 +0000 UTC m=+73.342447038 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834309 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834348 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:42.834337015 +0000 UTC m=+73.342508029 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.836372 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.836405 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.836415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.836428 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:26 crc 
kubenswrapper[4656]: I0128 15:19:26.836441 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.856523 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerStarted","Data":"8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.866847 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" event={"ID":"7112154f-4499-48ec-9135-6f4a26eca33a","Type":"ContainerStarted","Data":"c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.866919 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" event={"ID":"7112154f-4499-48ec-9135-6f4a26eca33a","Type":"ContainerStarted","Data":"544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.866931 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" event={"ID":"7112154f-4499-48ec-9135-6f4a26eca33a","Type":"ContainerStarted","Data":"61dafdc870de9a07c8341c1b0a9e5097abc71526c434f6f7e08d951f55f35521"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.880684 4656 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.904023 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.919513 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.938589 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c52
43f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.939250 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.939286 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.939311 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.939330 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.939340 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.964654 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.984046 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.018710 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.042638 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.042969 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.043084 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.043225 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.043301 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:27Z","lastTransitionTime":"2026-01-28T15:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.056427 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.073065 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.091787 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.165975 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.257803 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.257864 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.257891 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.257930 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.258154 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:27Z","lastTransitionTime":"2026-01-28T15:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.351919 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.364757 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.364790 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.364805 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.364825 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.364846 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:27Z","lastTransitionTime":"2026-01-28T15:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.369732 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bmj6r"] Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.370526 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:27 crc kubenswrapper[4656]: E0128 15:19:27.370615 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.378653 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.395797 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.420295 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623
a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.437573 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.453461 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.468301 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.468351 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.468369 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.468411 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.468433 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:27Z","lastTransitionTime":"2026-01-28T15:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.476155 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc 
kubenswrapper[4656]: I0128 15:19:27.500470 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.525330 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.548894 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.572598 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.572676 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.572707 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.572738 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.572751 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:27Z","lastTransitionTime":"2026-01-28T15:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.573823 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.573866 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhrdd\" (UniqueName: \"kubernetes.io/projected/11320542-8463-40db-8981-632be2bd5a48-kube-api-access-rhrdd\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.577213 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.610051 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.625351 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.695873 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.723386 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.820639 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.843051 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.859703 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.972647 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:02:21.89988513 +0000 UTC Jan 28 15:19:27 crc kubenswrapper[4656]: E0128 15:19:27.974047 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:27 crc kubenswrapper[4656]: E0128 15:19:27.974144 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:28.47412334 +0000 UTC m=+58.982294144 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs") pod "network-metrics-daemon-bmj6r" (UID: "11320542-8463-40db-8981-632be2bd5a48") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.974647 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.974711 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhrdd\" (UniqueName: \"kubernetes.io/projected/11320542-8463-40db-8981-632be2bd5a48-kube-api-access-rhrdd\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.977885 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.977923 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.977933 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.977953 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.978001 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:27Z","lastTransitionTime":"2026-01-28T15:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.005270 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhrdd\" (UniqueName: \"kubernetes.io/projected/11320542-8463-40db-8981-632be2bd5a48-kube-api-access-rhrdd\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.082377 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.082701 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.082817 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.082926 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.083040 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.170026 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.170139 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:28 crc kubenswrapper[4656]: E0128 15:19:28.170280 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.170719 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:28 crc kubenswrapper[4656]: E0128 15:19:28.170798 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:28 crc kubenswrapper[4656]: E0128 15:19:28.170874 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.223307 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.223351 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.223361 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.223382 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.223394 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.409804 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.409843 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.409857 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.409874 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.409883 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.504754 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:28 crc kubenswrapper[4656]: E0128 15:19:28.504963 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:28 crc kubenswrapper[4656]: E0128 15:19:28.505027 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:29.505009032 +0000 UTC m=+60.013179836 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs") pod "network-metrics-daemon-bmj6r" (UID: "11320542-8463-40db-8981-632be2bd5a48") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.513223 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.513271 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.513285 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.513311 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.513326 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.615676 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.616192 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.616261 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.616324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.616382 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.720626 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.720697 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.720713 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.720738 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.720755 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.861175 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.861214 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.861224 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.861242 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.861262 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.868012 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.883534 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.899133 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.919534 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.944712 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.965008 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.965051 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.965063 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.965083 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.965097 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.973811 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:01:38.659119029 +0000 UTC Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.978352 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.003833 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.026430 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\
\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.046513 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.067100 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc 
kubenswrapper[4656]: I0128 15:19:29.069618 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.069750 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.070057 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.070369 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.070577 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.088301 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.104863 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.135493 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.168842 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.170843 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:29 crc kubenswrapper[4656]: E0128 15:19:29.171101 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.174126 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.174184 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.174201 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.174216 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.174256 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.187686 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.205607 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.260760 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.277916 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.277972 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.277993 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.278017 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.278031 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.381125 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.381499 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.381568 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.381637 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.381697 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.484726 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.484767 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.484776 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.484792 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.484802 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.516076 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:29 crc kubenswrapper[4656]: E0128 15:19:29.516546 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:29 crc kubenswrapper[4656]: E0128 15:19:29.516875 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:31.516828967 +0000 UTC m=+62.024999771 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs") pod "network-metrics-daemon-bmj6r" (UID: "11320542-8463-40db-8981-632be2bd5a48") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.588094 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.588749 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.589001 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.589119 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.589236 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.692042 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.693123 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.693218 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.693293 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.693418 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.796949 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.797453 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.797617 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.797704 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.797786 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.899987 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.900028 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.900039 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.900058 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.900070 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.974293 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:57:14.573005916 +0000 UTC Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.997625 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/0.log" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.002051 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.002202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.002268 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.002375 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.002477 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.002093 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.002093 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82" exitCode=1 Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.003427 4656 scope.go:117] "RemoveContainer" containerID="ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.089086 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:355
12335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.106233 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.106275 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.106286 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.106302 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.106316 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.108586 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.125185 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.138395 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc 
kubenswrapper[4656]: I0128 15:19:30.153260 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc 
kubenswrapper[4656]: I0128 15:19:30.170499 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.170499 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.170873 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.170999 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.171190 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.171247 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.179864 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:29Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:19:29.141399 5844 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:29.141521 5844 handler.go:190] Sending *v1.EgressIP 
event handler 8 for removal\\\\nI0128 15:19:29.141539 5844 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:29.141554 5844 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:29.141562 5844 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:29.141589 5844 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:29.141597 5844 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:29.141605 5844 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:29.141627 5844 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:29.141655 5844 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:19:29.141673 5844 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:19:29.141699 5844 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:29.141704 5844 factory.go:656] Stopping watch factory\\\\nI0128 15:19:29.141722 5844 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:19:29.141749 5844 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.194368 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346b
aef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.213443 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.213524 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.213540 4656 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.213561 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.213580 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.217330 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b15
4edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is 
after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.233422 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.245873 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.258832 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.274559 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.289539 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.304838 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.317108 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.317138 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.317191 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.317217 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.317229 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.317994 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510e
f5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.333067 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.420574 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.420631 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.420645 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 
15:19:30.420667 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.420681 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.526281 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.526426 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.526440 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.526490 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.526502 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.629874 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.629907 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.629915 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.629930 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.629940 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.658354 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.658412 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.658425 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.658446 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.658460 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.675751 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.681408 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.681473 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.681486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.681507 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.681521 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.696716 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.702198 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.702240 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.702250 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.702296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.702310 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.716944 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.722145 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.722216 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.722239 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.722264 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.722278 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.791831 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.796810 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.796851 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.796864 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.796882 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.796894 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.817757 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.817940 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.819696 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.819725 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.819736 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.819754 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.819766 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.922233 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.922264 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.922272 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.922286 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.922295 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.975019 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:45:06.046163316 +0000 UTC Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.009720 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/0.log" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.012581 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806"} Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.013196 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.170076 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:31 crc kubenswrapper[4656]: E0128 15:19:31.172261 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.181296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.181331 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.181347 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.181366 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.181492 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:31Z","lastTransitionTime":"2026-01-28T15:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.380317 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.391839 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.391879 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.391927 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.391946 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.391960 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:31Z","lastTransitionTime":"2026-01-28T15:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.407457 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.439446 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:29Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:19:29.141399 5844 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:29.141521 5844 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:29.141539 5844 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0128 15:19:29.141554 5844 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:29.141562 5844 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:29.141589 5844 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:29.141597 5844 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:29.141605 5844 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:29.141627 5844 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:29.141655 5844 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:19:29.141673 5844 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:19:29.141699 5844 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:29.141704 5844 factory.go:656] Stopping watch factory\\\\nI0128 15:19:29.141722 5844 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:19:29.141749 5844 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.454684 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346b
aef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.472758 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.495974 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.496030 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.496045 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.496065 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.496082 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:31Z","lastTransitionTime":"2026-01-28T15:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.506421 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.526884 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.543577 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.563921 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.582100 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:31 crc kubenswrapper[4656]: E0128 15:19:31.582466 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:31 crc kubenswrapper[4656]: E0128 15:19:31.582590 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:35.582555759 +0000 UTC m=+66.090726563 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs") pod "network-metrics-daemon-bmj6r" (UID: "11320542-8463-40db-8981-632be2bd5a48") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.590136 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.598863 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.598901 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.598913 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.598928 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.598938 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:31Z","lastTransitionTime":"2026-01-28T15:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.608350 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.624587 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.646995 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623
a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.661412 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.767655 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.767705 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.767721 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.767752 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.767769 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:31Z","lastTransitionTime":"2026-01-28T15:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.773457 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc 
kubenswrapper[4656]: I0128 15:19:31.791414 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.811667 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.831870 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.848096 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.861430 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.877598 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.877677 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.877694 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.877732 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.877761 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:31Z","lastTransitionTime":"2026-01-28T15:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.882981 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.898540 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.917495 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623
a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.938558 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.957481 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.975803 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:17:41.628824194 +0000 UTC Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.976338 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc 
kubenswrapper[4656]: I0128 15:19:31.981449 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.981497 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.981511 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.981529 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.981543 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:31Z","lastTransitionTime":"2026-01-28T15:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.995102 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.014766 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.018322 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/1.log" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.019139 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/0.log" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.022310 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806" exitCode=1 Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.022350 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.022455 4656 scope.go:117] "RemoveContainer" containerID="ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.023442 4656 scope.go:117] "RemoveContainer" containerID="d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806" Jan 28 15:19:32 crc kubenswrapper[4656]: E0128 15:19:32.023646 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" Jan 28 15:19:32 crc 
kubenswrapper[4656]: I0128 15:19:32.045470 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.068493 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e1
2f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.085326 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.085615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.085627 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.085644 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.085654 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.090674 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:29Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:19:29.141399 5844 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:29.141521 5844 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:29.141539 5844 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0128 15:19:29.141554 5844 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:29.141562 5844 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:29.141589 5844 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:29.141597 5844 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:29.141605 5844 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:29.141627 5844 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:29.141655 5844 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:19:29.141673 5844 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:19:29.141699 5844 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:29.141704 5844 factory.go:656] Stopping watch factory\\\\nI0128 15:19:29.141722 5844 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:19:29.141749 5844 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.104187 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346b
aef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.118065 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.132275 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.149048 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.161938 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.170075 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:32 crc kubenswrapper[4656]: E0128 15:19:32.170287 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.170284 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:32 crc kubenswrapper[4656]: E0128 15:19:32.170397 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.170320 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:32 crc kubenswrapper[4656]: E0128 15:19:32.170511 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.178729 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"na
me\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.187530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.187569 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.187582 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.187601 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.187615 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.206495 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:29Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:19:29.141399 5844 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:29.141521 5844 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:29.141539 5844 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0128 15:19:29.141554 5844 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:29.141562 5844 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:29.141589 5844 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:29.141597 5844 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:29.141605 5844 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:29.141627 5844 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:29.141655 5844 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:19:29.141673 5844 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:19:29.141699 5844 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:29.141704 5844 factory.go:656] Stopping watch factory\\\\nI0128 15:19:29.141722 5844 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:19:29.141749 5844 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 
handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/
lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.220863 4656 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.238453 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.258367 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.275601 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.289809 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.290623 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.290668 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.290679 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.290715 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.290725 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.306304 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.321852 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.339885 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.358951 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\
\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.374216 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.392682 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.392721 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.392730 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.392747 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.392757 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.495605 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.495862 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.495936 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.496031 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.496134 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.598745 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.598879 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.598893 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.598913 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.598926 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.702283 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.702339 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.702355 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.702378 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.702393 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.804941 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.804969 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.804976 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.804990 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.805000 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.908521 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.908628 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.908674 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.908730 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.908759 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.976482 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 01:47:14.236610142 +0000 UTC Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.012125 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.012206 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.012220 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.012241 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.012256 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.038221 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/1.log" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.044675 4656 scope.go:117] "RemoveContainer" containerID="d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806" Jan 28 15:19:33 crc kubenswrapper[4656]: E0128 15:19:33.044925 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.064570 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.090734 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.104057 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.114838 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.114879 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.114887 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.114909 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.114920 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.118266 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.141355 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa53
0c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.153244 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346b
aef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.168867 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.169781 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:33 crc kubenswrapper[4656]: E0128 15:19:33.169982 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.184417 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\
\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.198234 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.211482 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.216882 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.216924 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.216942 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.216961 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.216975 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.225133 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.238906 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.254704 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623
a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.330939 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.332443 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.332503 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.332517 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.332537 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.332550 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.347538 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.357887 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.435970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.436016 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.436029 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.436047 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.436073 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.538268 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.538591 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.538658 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.538736 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.538803 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.641948 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.642001 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.642011 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.642064 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.642082 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.745242 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.745299 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.745314 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.745335 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.745348 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.847886 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.848393 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.848488 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.848581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.848695 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.950607 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.950911 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.951000 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.951102 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.951194 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.976975 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:28:01.638590261 +0000 UTC Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.053871 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.054219 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.054348 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.054486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.054589 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.157510 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.157566 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.157577 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.157598 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.157614 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.170202 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:34 crc kubenswrapper[4656]: E0128 15:19:34.170352 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.170759 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:34 crc kubenswrapper[4656]: E0128 15:19:34.170816 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.170959 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:34 crc kubenswrapper[4656]: E0128 15:19:34.171177 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.259936 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.259986 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.259995 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.260009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.260019 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.363368 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.363400 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.363411 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.363427 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.363463 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.470374 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.470756 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.470842 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.470913 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.470969 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.573666 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.573887 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.573983 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.574068 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.574129 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.677232 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.677596 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.677694 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.677833 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.677938 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.781214 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.781273 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.781285 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.781303 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.781315 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.883991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.884035 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.884083 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.884105 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.884117 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.977846 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 00:45:23.047760213 +0000 UTC Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.987056 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.987104 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.987113 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.987131 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.987144 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.090248 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.090281 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.090290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.090304 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.090314 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.172286 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:35 crc kubenswrapper[4656]: E0128 15:19:35.172497 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.193647 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.193689 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.193701 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.193740 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.193764 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.297026 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.297055 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.297065 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.297080 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.297089 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.400250 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.400300 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.400312 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.400331 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.400343 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.502592 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.502623 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.502653 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.502674 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.502686 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.606593 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.606643 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.606664 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.606687 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.606704 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.658017 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:35 crc kubenswrapper[4656]: E0128 15:19:35.658216 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:35 crc kubenswrapper[4656]: E0128 15:19:35.658320 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:43.658294099 +0000 UTC m=+74.166464903 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs") pod "network-metrics-daemon-bmj6r" (UID: "11320542-8463-40db-8981-632be2bd5a48") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.709757 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.709847 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.709863 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.709885 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.709901 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.812712 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.812764 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.812776 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.812819 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.812837 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.916148 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.916226 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.916237 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.916261 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.916276 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.978564 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 23:14:21.094545982 +0000 UTC Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.020510 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.020570 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.020585 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.020603 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.020615 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.123294 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.123364 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.123375 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.123415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.123426 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.169974 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:36 crc kubenswrapper[4656]: E0128 15:19:36.170136 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.170392 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:36 crc kubenswrapper[4656]: E0128 15:19:36.170452 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.170794 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:36 crc kubenswrapper[4656]: E0128 15:19:36.170844 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.226347 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.226377 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.226385 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.226400 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.226411 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.329121 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.329158 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.329181 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.329218 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.329228 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.432406 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.432439 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.432453 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.432474 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.432487 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.535956 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.536009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.536021 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.536043 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.536082 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.638775 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.638814 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.638826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.638844 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.638855 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.742103 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.742183 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.742201 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.742226 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.742240 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.845207 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.845260 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.845273 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.845294 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.845304 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.948349 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.948406 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.948416 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.948435 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.948446 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.979037 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:35:58.090592785 +0000 UTC Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.051494 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.051626 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.051643 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.051662 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.051675 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.154410 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.154444 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.154455 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.154473 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.154495 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.170184 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:37 crc kubenswrapper[4656]: E0128 15:19:37.170375 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.257993 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.258043 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.258056 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.258073 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.258089 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.360824 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.360858 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.360868 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.360882 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.360892 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.463450 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.463530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.463543 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.463560 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.463572 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.566103 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.566146 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.566154 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.566191 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.566201 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.668826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.668859 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.668871 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.668891 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.668905 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.772013 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.772090 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.772106 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.772126 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.772141 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.874602 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.874651 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.874665 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.874689 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.874701 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.978190 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.978230 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.978241 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.978263 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.978283 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.979736 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:24:20.300539262 +0000 UTC Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.080977 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.081020 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.081033 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.081053 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.081067 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.170285 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.170347 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.170400 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:38 crc kubenswrapper[4656]: E0128 15:19:38.170435 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:38 crc kubenswrapper[4656]: E0128 15:19:38.170516 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:38 crc kubenswrapper[4656]: E0128 15:19:38.170611 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.184279 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.184334 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.184346 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.184370 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.184384 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.287068 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.287106 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.287117 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.287146 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.287173 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.389974 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.390028 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.390041 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.390061 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.390073 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.493117 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.493188 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.493202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.493222 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.493235 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.595850 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.595897 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.595910 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.595933 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.595948 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.698055 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.698100 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.698112 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.698134 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.698146 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.726579 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.745224 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.759846 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.774002 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.788765 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.800426 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.802303 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.802391 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.802404 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.802432 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.802468 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.815527 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.836135 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa53
0c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.855152 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346b
aef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.873633 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5
a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.889883 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.904565 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.904597 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.904605 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.904621 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.904631 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.907140 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.920563 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.934197 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.948276 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.968643 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623
a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.980532 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:25:59.275136153 +0000 UTC Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.984788 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:1
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.007359 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.007395 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.007406 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.007432 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.007446 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.110232 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.110283 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.110296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.110317 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.110329 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.169934 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:39 crc kubenswrapper[4656]: E0128 15:19:39.170134 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.213435 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.213482 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.213493 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.213522 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.213534 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.316519 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.316561 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.316571 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.316589 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.316605 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.420282 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.420322 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.420332 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.420352 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.420363 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.523441 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.523470 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.523480 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.523495 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.523504 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.626222 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.626279 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.626289 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.626308 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.626318 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.729074 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.729106 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.729115 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.729131 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.729141 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.832252 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.832301 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.832312 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.832332 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.832345 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.935021 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.935077 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.935128 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.935150 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.935190 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.981523 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:10:20.309217243 +0000 UTC Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.040068 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.040116 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.040127 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.040147 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.040179 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.143394 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.143440 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.143451 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.143471 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.143483 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.170587 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.170656 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.170734 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:40 crc kubenswrapper[4656]: E0128 15:19:40.170774 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:40 crc kubenswrapper[4656]: E0128 15:19:40.170948 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:40 crc kubenswrapper[4656]: E0128 15:19:40.171035 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.245733 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.245836 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.245857 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.245877 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.245888 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.349175 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.349228 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.349241 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.349267 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.349280 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.451705 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.451741 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.451752 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.451770 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.451780 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.553656 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.553689 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.553699 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.553715 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.553726 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.656205 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.656242 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.656253 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.656274 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.656288 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.758780 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.758821 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.758832 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.758851 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.758862 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.861806 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.861849 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.861861 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.861888 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.861901 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.965074 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.965118 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.965128 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.965146 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.965180 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.982475 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:48:30.293803464 +0000 UTC Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.002253 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.002294 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.002306 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.002324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.002338 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: E0128 15:19:41.016720 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.022354 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.022404 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.022417 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.022440 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.022452 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: E0128 15:19:41.040995 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.062028 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.062087 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.062100 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.062122 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.062136 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: E0128 15:19:41.079589 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.084612 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.084650 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.084662 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.084682 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.084694 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: E0128 15:19:41.101021 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.105024 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.105064 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.105082 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.105105 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.105119 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: E0128 15:19:41.119347 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: E0128 15:19:41.119514 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.121412 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.121448 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.121459 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.121474 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.121484 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.169929 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:41 crc kubenswrapper[4656]: E0128 15:19:41.170097 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.205445 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.218885 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.223609 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.223636 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.223646 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.223660 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.223670 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.244487 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa53
0c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.258866 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346b
aef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.274070 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.291881 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.306972 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.323915 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c52
43f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.328287 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.328323 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.328335 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.328352 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.328365 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.338566 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.351115 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.367974 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5
a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.380330 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.395607 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.411961 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.430734 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.430763 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.430772 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.430787 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.430797 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.431727 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.446035 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.532968 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.533284 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.533300 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.533317 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.533328 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.635398 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.635426 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.635437 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.635453 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.635462 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.738282 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.738324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.738337 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.738356 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.738367 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.840244 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.840515 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.840613 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.840710 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.840797 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.943033 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.943095 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.943112 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.943132 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.943143 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.982607 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:31:25.129144662 +0000 UTC Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.045822 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.045868 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.045878 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.045895 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.045909 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.148960 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.149019 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.149036 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.149059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.149071 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.170402 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.170630 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.171065 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.172190 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.172478 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.172883 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.251473 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.251812 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.251907 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.252004 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.252092 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.355575 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.355879 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.356030 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.356122 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.356232 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.458991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.459298 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.459385 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.459519 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.459602 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.563023 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.563067 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.563078 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.563101 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.563113 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.665797 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.665837 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.665849 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.665867 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.665879 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.770531 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.770580 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.770594 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.770637 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.770647 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.837651 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.837794 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.837835 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.837875 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.837977 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:20:14.837937675 +0000 UTC m=+105.346108499 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838058 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838106 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838106 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838127 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838137 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838151 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838194 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:20:14.838182832 +0000 UTC m=+105.346353806 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838215 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:20:14.838204753 +0000 UTC m=+105.346375737 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838153 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838236 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838254 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:20:14.838247334 +0000 UTC m=+105.346418138 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.838077 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838300 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:20:14.838280985 +0000 UTC m=+105.346451969 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.873497 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.873541 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.873550 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.873568 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.873581 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.975967 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.975997 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.976035 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.976051 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.976060 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.983664 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:10:58.627013704 +0000 UTC Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.078899 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.078936 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.078948 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.078967 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.078981 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.170669 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:43 crc kubenswrapper[4656]: E0128 15:19:43.170860 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.181118 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.181190 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.181203 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.181222 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.181235 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.284725 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.284965 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.284982 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.285009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.285024 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.388648 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.388692 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.388704 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.388725 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.388738 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.491712 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.491764 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.491779 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.491799 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.491812 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.595002 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.595054 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.595067 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.595088 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.595107 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.698733 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.698767 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.698780 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.698796 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.698808 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.750326 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:43 crc kubenswrapper[4656]: E0128 15:19:43.750534 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:43 crc kubenswrapper[4656]: E0128 15:19:43.750626 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:59.750602131 +0000 UTC m=+90.258772935 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs") pod "network-metrics-daemon-bmj6r" (UID: "11320542-8463-40db-8981-632be2bd5a48") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.802454 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.802511 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.802524 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.802545 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.802560 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.906348 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.906397 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.906409 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.906427 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.906443 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.984782 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:16:26.101065302 +0000 UTC Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.009428 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.009499 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.009512 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.009530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.009548 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.113391 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.113740 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.113753 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.113772 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.113783 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.170229 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.170285 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:44 crc kubenswrapper[4656]: E0128 15:19:44.170384 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:44 crc kubenswrapper[4656]: E0128 15:19:44.170548 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.170597 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:44 crc kubenswrapper[4656]: E0128 15:19:44.170676 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.216042 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.216099 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.216108 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.216130 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.216144 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.318220 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.318259 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.318270 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.318287 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.318297 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.421212 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.421249 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.421257 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.421272 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.421283 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.523475 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.523508 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.523520 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.523538 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.523549 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.626682 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.626709 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.626718 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.626734 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.626743 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.729021 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.729065 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.729075 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.729090 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.729100 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.831800 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.831849 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.831860 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.831879 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.831891 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.934703 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.934756 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.934770 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.934791 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.934804 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.985586 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:44:30.978594684 +0000 UTC Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.037913 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.037964 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.037973 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.037989 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.038000 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.140267 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.140311 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.140324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.140342 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.140353 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.170120 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:45 crc kubenswrapper[4656]: E0128 15:19:45.170401 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.243088 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.243145 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.243155 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.243192 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.243203 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.346381 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.346415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.346423 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.346438 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.346448 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.449751 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.449798 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.449810 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.449828 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.449841 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.554551 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.554603 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.554616 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.554638 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.554919 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.658349 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.658662 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.658672 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.658689 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.658700 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.761747 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.761791 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.761802 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.761820 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.761834 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.865785 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.865827 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.865837 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.865853 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.865863 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.970694 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.970751 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.970762 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.970789 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.970802 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.986415 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 23:29:09.2161276 +0000 UTC Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.074992 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.075068 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.075080 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.075100 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.075112 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.169648 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:46 crc kubenswrapper[4656]: E0128 15:19:46.169817 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.169835 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.169865 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:46 crc kubenswrapper[4656]: E0128 15:19:46.170329 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:46 crc kubenswrapper[4656]: E0128 15:19:46.170449 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.170575 4656 scope.go:117] "RemoveContainer" containerID="d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.177532 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.177556 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.177566 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.177581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.177591 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.280859 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.280916 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.280927 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.280948 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.280959 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.438223 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.438252 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.438281 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.438314 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.438325 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.542342 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.542388 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.542400 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.542417 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.542427 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.645546 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.645587 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.645597 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.645616 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.645629 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.748700 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.748747 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.748762 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.748782 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.748793 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.851569 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.851629 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.851645 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.851690 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.851705 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.954059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.954112 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.954126 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.954143 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.954155 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.987456 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 14:42:09.731628355 +0000 UTC Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.057144 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.057190 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.057202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.057219 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.057229 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:47Z","lastTransitionTime":"2026-01-28T15:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.101004 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/1.log" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.103755 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b"} Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.105189 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.143375 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8
f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.160115 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.160566 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.160594 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.160605 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.160624 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.160638 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:47Z","lastTransitionTime":"2026-01-28T15:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.172576 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:47 crc kubenswrapper[4656]: E0128 15:19:47.172760 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.178321 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.197677 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.219616 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.237780 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.252393 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.263716 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.263751 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.263763 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.263780 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.263790 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:47Z","lastTransitionTime":"2026-01-28T15:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.270696 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.306586 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.755118 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.755188 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.755202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.755224 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.755269 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:47Z","lastTransitionTime":"2026-01-28T15:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.797877 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.817677 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.833154 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.855842 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.859072 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.859217 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.859272 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 
15:19:47.859347 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.859379 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:47Z","lastTransitionTime":"2026-01-28T15:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.875920 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.889999 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e
503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.911013 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.078273 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:26:51.299213437 +0000 UTC Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.081378 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.081433 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.081452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.081496 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 
15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.081507 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.170108 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:48 crc kubenswrapper[4656]: E0128 15:19:48.170345 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.170601 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:48 crc kubenswrapper[4656]: E0128 15:19:48.170720 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.170894 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:48 crc kubenswrapper[4656]: E0128 15:19:48.170943 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.261129 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.261202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.261218 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.261239 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.261251 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.366220 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.366296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.366307 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.366325 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.366627 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.469846 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.469887 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.469904 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.469923 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.469937 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.573478 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.573537 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.573548 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.573579 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.573593 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.677130 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.677271 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.677287 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.677311 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.677334 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.812502 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.812553 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.812566 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.812589 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.812604 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.915657 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.915755 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.915775 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.915809 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.915827 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.018499 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.018538 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.018551 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.018567 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.018578 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.078598 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:22:48.692741952 +0000 UTC Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.121608 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.121647 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.121655 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.121671 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.121680 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.170392 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:49 crc kubenswrapper[4656]: E0128 15:19:49.170608 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.224754 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.224788 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.224799 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.224822 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.224834 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.328225 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.328258 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.328267 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.328290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.328301 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.430379 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.430416 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.430425 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.430439 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.430449 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.534118 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.534551 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.534564 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.534629 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.534647 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.638186 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.638239 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.638251 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.638271 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.638285 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.740740 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.740777 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.740787 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.740801 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.740811 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.843898 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.843955 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.843967 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.843987 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.843998 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.946246 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.946290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.946321 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.946451 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.946467 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.079465 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:32:22.224956047 +0000 UTC Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.081230 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.081303 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.081312 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.081327 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.081355 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.122139 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/2.log" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.122961 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/1.log" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.126337 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b" exitCode=1 Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.126383 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.126769 4656 scope.go:117] "RemoveContainer" containerID="d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.128210 4656 scope.go:117] "RemoveContainer" containerID="62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b" Jan 28 15:19:50 crc kubenswrapper[4656]: E0128 15:19:50.128844 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.149715 4656 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.164070 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.169984 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.170008 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.169984 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:50 crc kubenswrapper[4656]: E0128 15:19:50.170106 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:50 crc kubenswrapper[4656]: E0128 15:19:50.170200 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:50 crc kubenswrapper[4656]: E0128 15:19:50.170278 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.183924 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.183966 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.183974 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.183991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.184018 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.186660 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 
factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib
/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.199132 4656 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.215965 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.232358 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.245105 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.261790 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c52
43f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.278068 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2a
f59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.287004 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.287047 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.287059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.287078 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.287091 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.296227 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.313078 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5
a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.334064 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.352189 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.358857 4656 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.369551 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.390512 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.390584 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.390605 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.390664 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.390682 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.397112 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.416502 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.493355 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.493415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.493442 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.493470 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.493484 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.597065 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.597113 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.597126 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.597145 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.597182 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.765282 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.765345 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.765364 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.765401 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.765426 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.869251 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.869286 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.869294 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.869308 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.869318 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.972651 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.972696 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.972706 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.972724 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.972741 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.076802 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.076891 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.076904 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.076928 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.076945 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.080334 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:35:47.164441914 +0000 UTC Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.134044 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/2.log" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.163833 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.163869 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.163877 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.163895 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.163928 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.170639 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:51 crc kubenswrapper[4656]: E0128 15:19:51.170755 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.189942 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 15:19:51 crc kubenswrapper[4656]: E0128 15:19:51.191223 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.196608 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.196648 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.196660 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.196680 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.196693 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.199291 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: E0128 15:19:51.214112 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.215639 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z"
Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.218970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.219093 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.219109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.219133 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.219145 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.236749 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: E0128 15:19:51.237572 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.243917 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.244019 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.244030 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.244051 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.244072 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.347680 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc 
kubenswrapper[4656]: E0128 15:19:51.356009 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.367194 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.367239 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.367250 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.367274 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.367286 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.374586 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: E0128 15:19:51.389443 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: E0128 15:19:51.389609 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.392941 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.392983 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.392996 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.393013 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.393026 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.402883 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.417771 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.435450 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.466393 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 
factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib
/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.486787 4656 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.495887 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.495924 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 
15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.495936 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.495953 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.495965 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.509278 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5
a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.529651 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.550373 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.568918 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.587259 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.599740 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.599808 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.599823 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.599851 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.599866 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.607781 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.704942 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.705087 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.705109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.705139 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.705198 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.910911 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.910970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.911011 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.911120 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.911140 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.015826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.015882 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.015895 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.015922 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.015940 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.081488 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:50:34.273287012 +0000 UTC Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.119738 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.119799 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.119816 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.119838 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.119853 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.170786 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.170931 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.170948 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:52 crc kubenswrapper[4656]: E0128 15:19:52.171109 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:52 crc kubenswrapper[4656]: E0128 15:19:52.171237 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:52 crc kubenswrapper[4656]: E0128 15:19:52.171359 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.223158 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.223236 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.223246 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.223587 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.223720 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.327513 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.327583 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.327597 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.327621 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.327634 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.431545 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.431604 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.431619 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.431644 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.431661 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.534207 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.534281 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.534296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.534320 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.534335 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.637714 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.637751 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.637766 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.637782 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.637793 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.741199 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.741244 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.741256 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.741276 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.741287 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.867530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.867584 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.867595 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.867615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.867628 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.970754 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.970806 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.970838 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.970856 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.970875 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.074593 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.074649 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.074660 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.074679 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.074691 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.082120 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 23:16:08.430420191 +0000 UTC Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.170495 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:53 crc kubenswrapper[4656]: E0128 15:19:53.170737 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.181455 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.181515 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.181528 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.181550 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.181564 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.285659 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.286254 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.286355 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.286444 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.286513 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.391221 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.391288 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.391300 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.391323 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.391337 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.494985 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.495027 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.495040 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.495062 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.495077 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.599324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.599381 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.599393 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.599416 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.599430 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.702270 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.702320 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.702329 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.702346 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.702356 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.804743 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.805117 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.805234 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.805386 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.805596 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.908719 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.908774 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.908788 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.908807 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.908817 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.011615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.011678 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.011687 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.011710 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.011720 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.083803 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:03:09.586602176 +0000 UTC Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.115901 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.115951 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.115965 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.115989 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.116000 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.170275 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.170348 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.170318 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:54 crc kubenswrapper[4656]: E0128 15:19:54.170507 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:54 crc kubenswrapper[4656]: E0128 15:19:54.170662 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:54 crc kubenswrapper[4656]: E0128 15:19:54.170766 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.219234 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.219688 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.220279 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.220567 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.220664 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.323604 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.324077 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.324245 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.324429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.324591 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.427805 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.427852 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.427865 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.427885 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.427898 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.530760 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.530801 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.530846 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.530863 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.530877 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.633199 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.633233 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.633243 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.633259 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.633271 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.736054 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.736095 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.736104 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.736120 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.736134 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.838304 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.838339 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.838349 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.838367 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.838379 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.941072 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.941115 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.941124 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.941141 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.941181 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.043678 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.043716 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.043729 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.043751 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.043770 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.084707 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 21:23:53.435213973 +0000 UTC Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.146969 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.147010 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.147021 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.147039 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.147052 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.169702 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:55 crc kubenswrapper[4656]: E0128 15:19:55.169943 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.250749 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.250869 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.250884 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.250917 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.250930 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.355862 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.355946 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.355970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.355993 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.356028 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.460068 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.460753 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.460769 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.460789 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.460802 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.564454 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.564534 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.564548 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.564572 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.564588 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.667779 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.667838 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.667886 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.667907 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.667920 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.770705 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.770752 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.770765 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.770788 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.770801 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.874027 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.874110 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.874130 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.874189 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.874209 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.982492 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.983853 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.983870 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.983894 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.983914 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.084833 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:35:39.822284073 +0000 UTC Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.088865 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.088950 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.088963 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.088983 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.088994 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.170365 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.170451 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.170501 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:56 crc kubenswrapper[4656]: E0128 15:19:56.170609 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:56 crc kubenswrapper[4656]: E0128 15:19:56.170926 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:56 crc kubenswrapper[4656]: E0128 15:19:56.171094 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.193245 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.193311 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.193323 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.193347 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.193362 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.327993 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.328044 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.328056 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.328076 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.328088 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.431850 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.431960 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.431977 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.432004 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.432035 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.535919 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.536025 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.536040 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.536063 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.536082 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.639506 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.639554 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.639564 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.639583 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.639595 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.743798 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.743868 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.743883 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.743908 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.743926 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.848210 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.848277 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.848292 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.848313 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.848330 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.951657 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.952142 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.952277 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.952391 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.952497 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.056477 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.056540 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.056555 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.056577 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.056589 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.085039 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:35:14.068399595 +0000 UTC Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.159801 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.159850 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.159865 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.159886 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.159898 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.170330 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:57 crc kubenswrapper[4656]: E0128 15:19:57.170548 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.266289 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.266786 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.266904 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.267026 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.267137 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.371072 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.371632 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.371717 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.371833 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.371906 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.475219 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.475257 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.475268 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.475288 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.475302 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.579750 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.579812 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.579825 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.579853 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.579868 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.683742 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.683813 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.683826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.683851 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.683864 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.787932 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.787990 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.788017 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.788039 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.788052 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.892347 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.892405 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.892418 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.892439 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.892452 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.996279 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.996325 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.996337 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.996358 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.996371 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.085635 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:01:20.299056371 +0000 UTC Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.099992 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.100035 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.100049 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.100071 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.100086 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.170053 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.170053 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:58 crc kubenswrapper[4656]: E0128 15:19:58.170331 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.170087 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:58 crc kubenswrapper[4656]: E0128 15:19:58.170498 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:58 crc kubenswrapper[4656]: E0128 15:19:58.170603 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.204001 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.204056 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.204068 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.204090 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.204104 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.307060 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.307124 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.307139 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.307183 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.307193 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.409916 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.409973 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.409987 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.410019 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.410036 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.513417 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.513476 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.513486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.513510 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.513521 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.618347 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.618391 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.618407 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.618429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.618441 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.722886 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.722937 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.722953 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.722972 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.722984 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.825860 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.825907 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.825919 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.825939 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.825952 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.929688 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.929755 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.929769 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.929791 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.929804 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.033007 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.033071 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.033085 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.033109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.033129 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.086278 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:15:26.103876332 +0000 UTC Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.137367 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.137438 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.137453 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.137485 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.137502 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.170279 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:59 crc kubenswrapper[4656]: E0128 15:19:59.170506 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.241202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.241256 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.241269 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.241289 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.241300 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.344423 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.344476 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.344487 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.344509 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.344523 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.448705 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.448805 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.448818 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.448841 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.448861 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.552018 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.552675 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.552693 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.552717 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.552730 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.656668 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.656718 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.656728 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.656748 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.656760 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.761077 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.761532 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.761655 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.761774 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.761888 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.797471 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:59 crc kubenswrapper[4656]: E0128 15:19:59.797949 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:59 crc kubenswrapper[4656]: E0128 15:19:59.798124 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:20:31.798080905 +0000 UTC m=+122.306251879 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs") pod "network-metrics-daemon-bmj6r" (UID: "11320542-8463-40db-8981-632be2bd5a48") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.866775 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.866846 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.866864 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.866890 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.866905 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.971112 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.971263 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.971278 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.971303 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.971322 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.075197 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.075288 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.075303 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.075326 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.075360 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.086798 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:10:01.869164617 +0000 UTC Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.170789 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.170873 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.170900 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:00 crc kubenswrapper[4656]: E0128 15:20:00.171029 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:00 crc kubenswrapper[4656]: E0128 15:20:00.171522 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:00 crc kubenswrapper[4656]: E0128 15:20:00.171768 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.179335 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.179387 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.179398 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.179420 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.179433 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.284225 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.284273 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.284285 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.284304 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.284316 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.387881 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.387970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.387992 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.388016 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.388031 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.492067 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.492117 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.492128 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.492149 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.492190 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.597245 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.597306 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.597324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.597351 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.597365 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.701036 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.701088 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.701099 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.701144 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.701155 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.804710 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.804774 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.804792 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.804815 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.804835 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.908913 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.909551 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.909571 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.909600 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.909613 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.013389 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.013463 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.013478 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.013502 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.013522 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.086981 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:10:28.659244759 +0000 UTC Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.116863 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.117262 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.117356 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.117472 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.117554 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.169825 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:01 crc kubenswrapper[4656]: E0128 15:20:01.170479 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.193744 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.338826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.338893 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.338908 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.338934 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.339010 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.354115 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc 
kubenswrapper[4656]: I0128 15:20:01.375272 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.404413 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.429182 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.443494 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.443582 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.443595 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.443618 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.443633 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.451548 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.473702 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.502667 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 
factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib
/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.524339 4656 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.544215 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5
a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.548545 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.548678 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.548693 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.548718 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.548736 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.564822 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510e
f5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.585854 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.604340 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.623497 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.638518 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.647488 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.647532 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.647543 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.647563 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.647573 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.660491 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623
a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: E0128 15:20:01.662182 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",
\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.666783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.666831 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.666844 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.666870 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.666884 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.680136 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: E0128 15:20:01.683233 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.690000 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.690047 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.690062 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.690087 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.690103 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: E0128 15:20:01.706497 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.712623 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.712732 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.712746 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.712783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.712795 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: E0128 15:20:01.728971 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.733866 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.733937 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.733950 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.733972 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.733987 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: E0128 15:20:01.750042 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z"
Jan 28 15:20:01 crc kubenswrapper[4656]: E0128 15:20:01.750264 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.752930 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.752982 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.752992 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.753014 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.753029 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.856968 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.857539 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.857635 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.857747 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.857829 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.961488 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.961604 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.961620 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.961645 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.961659 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.064564 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.064653 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.064673 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.064717 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.064731 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.087939 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:52:55.690988503 +0000 UTC
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.168502 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.168554 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.168564 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.168581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.168591 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.170011 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.170106 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:20:02 crc kubenswrapper[4656]: E0128 15:20:02.170251 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:20:02 crc kubenswrapper[4656]: E0128 15:20:02.170351 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.170460 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:20:02 crc kubenswrapper[4656]: E0128 15:20:02.177440 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.272606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.272657 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.272671 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.272694 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.272711 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.375614 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.375674 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.375687 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.375705 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.375717 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.478726 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.478777 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.478798 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.478817 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.478832 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.582797 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.582859 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.582878 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.582898 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.582912 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.686372 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.686420 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.686437 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.686456 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.686467 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.790065 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.790124 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.790138 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.790180 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.790198 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.894392 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.894971 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.894990 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.895012 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.895027 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.997851 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.997896 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.997907 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.997926 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.997939 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.088717 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 22:58:14.826051139 +0000 UTC
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.102275 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.102348 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.102362 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.102390 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.102405 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.170362 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:20:03 crc kubenswrapper[4656]: E0128 15:20:03.170573 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.206353 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.206402 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.206415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.206437 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.206450 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.310223 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.310278 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.310288 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.310308 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.310318 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.413543 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.413615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.413639 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.413663 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.413677 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.517569 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.517626 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.517639 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.517660 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.517675 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.621353 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.621400 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.621410 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.621428 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.621439 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.725984 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.726035 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.726049 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.726074 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.726089 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.830222 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.830262 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.830272 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.830291 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.830302 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.933460 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.933503 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.933513 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.933530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.933540 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.037697 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.038220 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.038385 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.038495 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.038512 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.089633 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:20:13.367807964 +0000 UTC Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.141989 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.142061 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.142075 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.142101 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.142119 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.169699 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.169720 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.169777 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:04 crc kubenswrapper[4656]: E0128 15:20:04.170298 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:04 crc kubenswrapper[4656]: E0128 15:20:04.170458 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:04 crc kubenswrapper[4656]: E0128 15:20:04.170637 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.245405 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.245469 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.245485 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.245507 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.245519 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.348525 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.348569 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.348579 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.348596 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.348607 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.452191 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.452240 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.452251 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.452269 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.452279 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.555300 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.555348 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.555362 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.555379 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.555391 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.659645 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.659725 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.659743 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.659772 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.659787 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.763593 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.763637 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.763652 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.763674 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.763690 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.867080 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.867154 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.867202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.867229 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.867241 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.971932 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.971999 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.972009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.972031 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.972043 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.075396 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.075452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.075464 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.075486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.075500 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.090582 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:31:25.3678941 +0000 UTC Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.170647 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:05 crc kubenswrapper[4656]: E0128 15:20:05.171016 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.174030 4656 scope.go:117] "RemoveContainer" containerID="62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b" Jan 28 15:20:05 crc kubenswrapper[4656]: E0128 15:20:05.174851 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.181767 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.181841 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.181863 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.181900 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.310072 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.328939 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.345426 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.360416 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.375247 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.395756 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.416845 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.416921 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.416933 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.417005 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.417021 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.421005 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.439756 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.460270 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623
a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.477007 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.496041 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.512933 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.520153 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.520228 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.520243 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.520264 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.520281 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.536721 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.562922 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.610038 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.623134 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.623190 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.623201 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.623218 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.623227 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.645729 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.677844 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 
15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa53
0c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.695491 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346b
aef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.726953 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.727031 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.727049 4656 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.727075 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.727092 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.831218 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.831271 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.831281 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.831304 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.831316 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.935104 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.935168 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.935181 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.935201 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.935217 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.038129 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.038212 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.038223 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.038246 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.038258 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.091625 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 07:28:28.10227335 +0000 UTC Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.141462 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.141514 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.141524 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.141543 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.141554 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.169893 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.170005 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:06 crc kubenswrapper[4656]: E0128 15:20:06.170123 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.170185 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:06 crc kubenswrapper[4656]: E0128 15:20:06.170292 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:06 crc kubenswrapper[4656]: E0128 15:20:06.170405 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.245396 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.245486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.245509 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.245570 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.245594 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.323754 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/0.log" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.323843 4656 generic.go:334] "Generic (PLEG): container finished" podID="7662a84d-d9cb-4684-b76f-c63ffeff8344" containerID="469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434" exitCode=1 Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.323888 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerDied","Data":"469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.324476 4656 scope.go:117] "RemoveContainer" containerID="469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.342214 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.348837 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.348895 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.348917 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.348946 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.348961 4656 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.369577 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\
\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.388927 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.417765 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.433655 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.451945 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.451996 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.452009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.452032 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.452055 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.457367 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 
15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa53
0c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.477878 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346b
aef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.497488 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.516705 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.537060 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.554783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.554831 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.554841 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.554863 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.554876 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.556404 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:05Z\\\",\\\"message\\\":\\\"2026-01-28T15:19:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9\\\\n2026-01-28T15:19:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9 to /host/opt/cni/bin/\\\\n2026-01-28T15:19:20Z [verbose] multus-daemon started\\\\n2026-01-28T15:19:20Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:20:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.573110 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.595329 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.618601 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5
a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.635830 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.658027 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.658062 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.658071 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.658089 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.658101 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.659659 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.674250 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.761836 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.761876 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.761888 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.761907 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.761919 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.865842 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.865884 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.865893 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.865917 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.865931 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.969767 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.969813 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.969823 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.969848 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.969861 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.073440 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.073520 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.073532 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.073555 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.073575 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.091840 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:49:11.225859024 +0000 UTC Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.171924 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:07 crc kubenswrapper[4656]: E0128 15:20:07.172060 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.176557 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.176591 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.176602 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.176622 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.176635 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.280646 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.280713 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.280731 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.280754 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.280768 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.330024 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/0.log" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.330099 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerStarted","Data":"c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.346573 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.359121 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.373205 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.383461 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.383493 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.383505 4656 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.383522 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.383533 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.390191 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.401675 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.421794 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:05Z\\\",\\\"message\\\":\\\"2026-01-28T15:19:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9\\\\n2026-01-28T15:19:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9 to /host/opt/cni/bin/\\\\n2026-01-28T15:19:20Z [verbose] multus-daemon started\\\\n2026-01-28T15:19:20Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:20:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.444598 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 
15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa53
0c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.472057 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346b
aef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.486580 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.486612 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.486620 4656 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.486636 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.486646 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.488182 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.500819 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
8T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.514963 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.527662 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.539941 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.551616 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.564865 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.581920 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623
a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.588986 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.589041 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.589053 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.589070 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.589453 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.595479 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.692699 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.692730 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.692745 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.692766 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.692780 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.803938 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.804356 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.804748 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.804924 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.805149 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.908268 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.908762 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.908865 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.908976 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.909077 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.012030 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.012445 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.012543 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.012658 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.012726 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.092125 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:18:57.099141849 +0000 UTC
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.115144 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.115488 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.115606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.115690 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.115770 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.170510 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.170602 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:20:08 crc kubenswrapper[4656]: E0128 15:20:08.170630 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:20:08 crc kubenswrapper[4656]: E0128 15:20:08.170665 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.170525 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:20:08 crc kubenswrapper[4656]: E0128 15:20:08.171055 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.218567 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.218607 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.218617 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.218631 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.218640 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.321519 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.321925 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.322101 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.322325 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.322471 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.425058 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.425098 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.425109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.425127 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.425139 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.527559 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.527598 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.527606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.527621 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.527630 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.629901 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.629952 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.629964 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.629983 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.629996 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.732753 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.732791 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.732802 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.732836 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.732847 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.836503 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.836548 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.836581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.836602 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.836614 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.938409 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.938459 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.938469 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.938485 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.938494 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.040737 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.040776 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.040788 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.040804 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.040814 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.093025 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:53:13.049360163 +0000 UTC
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.143296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.143350 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.143365 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.143385 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.143397 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.170079 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:20:09 crc kubenswrapper[4656]: E0128 15:20:09.170317 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.246488 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.246528 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.246541 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.246556 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.246567 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.348358 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.348415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.348429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.348457 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.348473 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.451144 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.451200 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.451212 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.451229 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.451240 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.553854 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.553905 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.553923 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.553949 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.553969 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.656900 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.656942 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.656952 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.656970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.656983 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.759896 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.759969 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.759995 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.760033 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.760058 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.862470 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.862511 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.862520 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.862534 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.862543 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.966047 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.966290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.966394 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.966467 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.966546 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.069240 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.069485 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.069549 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.069655 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.069727 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.093600 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:40:51.728602886 +0000 UTC
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.170094 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.170333 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:20:10 crc kubenswrapper[4656]: E0128 15:20:10.170415 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:20:10 crc kubenswrapper[4656]: E0128 15:20:10.170326 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.170474 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:20:10 crc kubenswrapper[4656]: E0128 15:20:10.170525 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.172793 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.172824 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.172836 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.172852 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.172864 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.275530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.275576 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.275592 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.275615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.275634 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.383768 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.383819 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.383831 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.383850 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.383863 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.486230 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.486273 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.486283 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.486300 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.486310 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.588330 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.588403 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.588429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.588458 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.588478 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.690353 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.690394 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.690412 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.690433 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.690448 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.793182 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.793222 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.793234 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.793253 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.793266 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.896363 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.896406 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.896415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.896432 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.896443 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.999123 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.999434 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.999529 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.999597 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.999685 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.094692 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 08:45:38.141674801 +0000 UTC Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.103022 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.103066 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.103078 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.103130 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.103141 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.169818 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:11 crc kubenswrapper[4656]: E0128 15:20:11.170149 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.190349 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.194301 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.205291 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.205329 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.205339 4656 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.205358 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.205370 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.211229 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.223743 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.240039 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:05Z\\\",\\\"message\\\":\\\"2026-01-28T15:19:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9\\\\n2026-01-28T15:19:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9 to /host/opt/cni/bin/\\\\n2026-01-28T15:19:20Z [verbose] multus-daemon started\\\\n2026-01-28T15:19:20Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:20:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.262699 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 
15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa53
0c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.278323 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346b
aef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.294640 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5
a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.308045 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.308097 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.308109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.308129 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.308142 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.308774 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510e
f5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.323440 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.333472 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.346403 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.358670 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.368689 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.385623 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623
a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.399153 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.411418 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.411483 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.411500 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.411525 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.411542 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.412155 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.423687 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.513817 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.513867 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.513876 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.513891 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.513899 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.616066 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.616106 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.616115 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.616133 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.616207 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.718818 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.718868 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.718882 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.718902 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.718913 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.821680 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.821750 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.821762 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.821782 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.821795 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.852424 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.852489 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.852501 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.852522 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.852532 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: E0128 15:20:11.867047 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.871323 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.871354 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.871367 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.871384 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.871396 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: E0128 15:20:11.886140 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.891090 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.891120 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.891128 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.891141 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.891150 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: E0128 15:20:11.902525 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.906326 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.906358 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.906371 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.906388 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.906397 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: E0128 15:20:11.918588 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.922766 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.922798 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.922809 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.922824 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.922832 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: E0128 15:20:11.941185 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: E0128 15:20:11.941314 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.943486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.943506 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.943516 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.943530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.943542 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.046083 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.046122 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.046134 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.046153 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.046187 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.095941 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 02:07:09.758961828 +0000 UTC Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.148312 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.148378 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.148387 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.148406 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.148417 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.169731 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.169800 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:12 crc kubenswrapper[4656]: E0128 15:20:12.169910 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.169956 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:12 crc kubenswrapper[4656]: E0128 15:20:12.170049 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:12 crc kubenswrapper[4656]: E0128 15:20:12.170200 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.250822 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.250858 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.250867 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.250884 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.250893 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.353403 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.353453 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.353465 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.353484 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.353499 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.460362 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.460440 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.460458 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.460936 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.460989 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.563153 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.563200 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.563220 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.563238 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.563251 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.666226 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.666270 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.666280 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.666294 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.666304 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.768482 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.768546 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.768560 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.768599 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.768612 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.872198 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.872250 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.872290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.872312 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.872328 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.975144 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.975253 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.975269 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.975289 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.975303 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.078844 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.078918 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.078937 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.079024 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.079043 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.096398 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:00:40.837581498 +0000 UTC Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.172556 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:13 crc kubenswrapper[4656]: E0128 15:20:13.172957 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.182061 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.182397 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.182483 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.182601 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.182670 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.284942 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.285240 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.285340 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.285413 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.285606 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.388261 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.388303 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.388316 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.388336 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.388349 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.490605 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.490687 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.490706 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.490730 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.490747 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.592856 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.592937 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.592945 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.592962 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.592971 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.695525 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.695574 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.695592 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.695615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.695631 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.798509 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.798622 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.798638 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.798661 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.798678 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.901054 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.901104 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.901120 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.901145 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.901190 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.005383 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.005460 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.005483 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.005512 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.005532 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.096636 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:58:54.47097829 +0000 UTC Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.108003 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.108096 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.108114 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.108137 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.108149 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.170469 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.170560 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.170974 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.171228 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.171572 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.171655 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.210941 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.210978 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.210991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.211008 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.211021 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.314225 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.314318 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.314344 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.314375 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.314398 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.416536 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.416581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.416593 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.416615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.416634 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.520680 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.520750 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.520760 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.520779 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.520789 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.623059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.623113 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.623127 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.623154 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.623185 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.726670 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.726723 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.726734 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.726754 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.726767 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.829245 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.829290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.829302 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.829321 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.829335 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.895435 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.895589 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895615 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.895591046 +0000 UTC m=+169.403761850 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.895657 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.895685 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.895706 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895713 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:20:14 crc 
kubenswrapper[4656]: E0128 15:20:14.895757 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.895746101 +0000 UTC m=+169.403916915 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895811 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895828 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895844 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895854 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895870 4656 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.895863254 +0000 UTC m=+169.404034058 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895875 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895888 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895931 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.895919316 +0000 UTC m=+169.404090130 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.896060 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.896256 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.896223114 +0000 UTC m=+169.404393918 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.931687 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.931737 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.931747 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.931767 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.931779 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.035242 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.035279 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.035288 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.035364 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.035381 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.096968 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:58:16.387273722 +0000 UTC Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.138056 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.138109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.138118 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.138138 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.138148 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.169923 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:15 crc kubenswrapper[4656]: E0128 15:20:15.170130 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.240452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.240492 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.240510 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.240528 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.240539 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.343148 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.343223 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.343234 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.343255 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.343269 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.445456 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.445495 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.445503 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.445517 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.445527 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.549288 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.549344 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.549354 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.549379 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.549391 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.652231 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.652287 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.652296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.652314 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.652328 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.755696 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.755742 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.755753 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.755771 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.755783 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.858186 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.858227 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.858238 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.858255 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.858265 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.961587 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.961661 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.961677 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.961702 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.961746 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.064594 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.064645 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.064656 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.064672 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.064683 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.097155 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 06:36:25.847152605 +0000 UTC Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.167838 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.167928 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.167950 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.167977 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.167996 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.169977 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.169982 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.170013 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:16 crc kubenswrapper[4656]: E0128 15:20:16.170278 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:16 crc kubenswrapper[4656]: E0128 15:20:16.170323 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:16 crc kubenswrapper[4656]: E0128 15:20:16.170373 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.270791 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.271115 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.271333 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.271510 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.271624 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.374016 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.374074 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.374091 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.374114 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.374132 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.477078 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.477122 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.477135 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.477153 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.477186 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.579783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.579838 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.579853 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.579873 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.579885 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.682606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.682642 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.682651 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.682665 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.682674 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.786948 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.786991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.787001 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.787016 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.787026 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.889182 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.889218 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.889229 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.889245 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.889259 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.993212 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.993259 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.993268 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.993293 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.993305 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.096079 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.096125 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.096136 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.096152 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.096186 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.098225 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:57:24.422152951 +0000 UTC Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.169716 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:17 crc kubenswrapper[4656]: E0128 15:20:17.169876 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.199865 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.199922 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.199935 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.199957 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.199972 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.305572 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.305605 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.305618 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.305632 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.305642 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.410237 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.410272 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.410281 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.410295 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.410304 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.513841 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.513903 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.513926 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.513960 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.513980 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.617602 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.617671 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.617688 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.617714 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.617742 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.721080 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.721148 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.721209 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.721243 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.721268 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.825465 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.825535 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.825559 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.825593 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.825618 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.936017 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.936072 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.936088 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.936113 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.936133 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.038529 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.038579 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.038590 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.038608 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.038622 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.099200 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:14:17.79040175 +0000 UTC Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.142227 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.142296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.142314 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.142343 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.142364 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.170228 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.170327 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:18 crc kubenswrapper[4656]: E0128 15:20:18.170416 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.170356 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:18 crc kubenswrapper[4656]: E0128 15:20:18.170606 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:18 crc kubenswrapper[4656]: E0128 15:20:18.170841 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.245295 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.245351 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.245367 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.245390 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.245406 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.348691 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.348805 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.348824 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.348894 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.348904 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.467505 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.467837 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.467934 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.468069 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.468193 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.571998 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.572062 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.572082 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.572109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.572128 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.674568 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.674872 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.674966 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.675058 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.675139 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.779382 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.780671 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.780893 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.781095 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.781393 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.906653 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.906702 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.906713 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.906732 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.906745 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.009406 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.009436 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.009444 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.009458 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.009467 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.100093 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:41:06.603566603 +0000 UTC Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.113059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.113111 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.113127 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.113152 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.113201 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.170555 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:19 crc kubenswrapper[4656]: E0128 15:20:19.170735 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.216686 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.217077 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.217274 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.217452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.217680 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.320135 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.320254 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.320278 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.320307 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.320325 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.423642 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.423702 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.423726 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.423757 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.423783 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.526430 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.526473 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.526486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.526670 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.526698 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.628697 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.628737 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.628752 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.628769 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.628781 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.731588 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.731639 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.731652 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.731671 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.731684 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.834516 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.834558 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.834578 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.834606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.834617 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.938108 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.938217 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.938255 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.938285 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.938319 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.040933 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.040992 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.041010 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.041034 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.041048 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.100636 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:09:43.052797393 +0000 UTC Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.148951 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.148999 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.149015 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.149044 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.149063 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.170479 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.170700 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:20 crc kubenswrapper[4656]: E0128 15:20:20.170703 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.170797 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:20 crc kubenswrapper[4656]: E0128 15:20:20.170941 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:20 crc kubenswrapper[4656]: E0128 15:20:20.171835 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.172334 4656 scope.go:117] "RemoveContainer" containerID="62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.251570 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.251610 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.251621 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.251635 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.251645 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.354016 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.354114 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.354132 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.354152 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.354183 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.456683 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.456728 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.456741 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.456760 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.456775 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.558905 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.558990 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.559009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.559034 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.559047 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.661857 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.661902 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.661913 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.661934 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.661948 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.764490 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.764532 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.764541 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.764557 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.764566 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.866525 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.866561 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.866571 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.866585 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.866594 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.968918 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.968953 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.968963 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.968981 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.968993 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.071290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.071323 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.071332 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.071349 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.071362 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.101094 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:10:07.758839126 +0000 UTC Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.169834 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:21 crc kubenswrapper[4656]: E0128 15:20:21.170002 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.173390 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.173419 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.173427 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.173441 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.173460 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.186337 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.200476 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.222391 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.237010 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.250485 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.263905 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a750cbb6ceaa1889263f277b489ae3b92336e27
c8e979f65558cbaf0084638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:05Z\\\",\\\"message\\\":\\\"2026-01-28T15:19:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9\\\\n2026-01-28T15:19:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9 to /host/opt/cni/bin/\\\\n2026-01-28T15:19:20Z [verbose] multus-daemon started\\\\n2026-01-28T15:19:20Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:20:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.276697 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.276724 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.276734 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.276750 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.276759 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.285808 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 
15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa53
0c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.300311 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346b
aef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.314042 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5
a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.327582 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.341564 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.352710 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.363430 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.375197 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.378881 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.378915 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.378938 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.378957 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.378969 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.389264 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/3.log" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.389927 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/2.log" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.392742 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" exitCode=1 Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.392783 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.392839 4656 scope.go:117] "RemoveContainer" containerID="62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.395930 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:20:21 crc kubenswrapper[4656]: E0128 15:20:21.396272 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 
40s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.401837 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59b72fa-6f07-4658-b277-0b10b8bf83a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f7605b956da7648bb4ea64104ebddadf45a4297723b28d1813ec330122f9de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4f0a931b81775cf3bcadec1c2d278079e6d6c08334a5d412f957ea057000a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33daf9c576813489c0d122bf8b57511d33c442f5c4f81c8a1ba17b349d04d4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be91e89edee3d65aa4855a9e6e4354e182726e95ba57165fbebc4e1b334a57\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25512a305427a2fbc0dc915cc3dfb21cadd3db472a2764f1b5a686d60ec422e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"c
ri-o://fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.417798 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.434234 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623
a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.445319 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.455204 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 
2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.468590 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\"
:\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.481455 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.481515 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.481551 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.481564 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.481581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.481594 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.507710 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59b72fa-6f07-4658-b277-0b10b8bf83a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f7605b956da7648bb4ea64104ebddadf45a4297723b28d1813ec330122f9de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4f0a931b81775cf3bcadec1c2d278079e6d6c08334a5d412f957ea057000a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33daf9c576813489c0d122bf8b57511d33c442f5c4f81c8a1ba17b349d04d4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be91e89edee3d65aa4855a9e6e4354e182726e95ba57165fbebc4e1b334a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25512a305427a2fbc0dc915cc3dfb21cadd3db472a2764f1b5a686d60ec422e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-28T15:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.522915 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.534770 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc 
kubenswrapper[4656]: I0128 15:20:21.549846 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.562022 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.576684 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:05Z\\\",\\\"message\\\":\\\"2026-01-28T15:19:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9\\\\n2026-01-28T15:19:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9 to /host/opt/cni/bin/\\\\n2026-01-28T15:19:20Z [verbose] multus-daemon started\\\\n2026-01-28T15:19:20Z [verbose] 
Readiness Indicator file check\\\\n2026-01-28T15:20:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.584579 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.584625 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.584638 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.584657 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.584670 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.595858 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 
15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:21Z\\\",\\\"message\\\":\\\"73 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:20:21.222738 6673 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"cluster\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\
\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc 
kubenswrapper[4656]: I0128 15:20:21.610771 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.623067 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.633393 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.646101 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.656853 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.667025 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.679106 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.686669 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.686706 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.686716 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.686734 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.686747 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.693030 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b
8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.788055 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.788108 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.788117 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: 
I0128 15:20:21.788132 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.788143 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.891018 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.891059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.891074 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.891092 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.891103 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.993804 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.993853 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.993866 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.993885 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.993897 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.118871 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:49:08.362358 +0000 UTC Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.120960 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.120999 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.121009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.121036 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.121055 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.170518 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.170581 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.170681 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.170775 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.170830 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.170882 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.224370 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.224417 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.224429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.224449 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.224463 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.316220 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.316262 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.316271 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.316288 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.316298 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.329730 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.333753 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.333788 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.333800 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.333819 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.333828 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.347757 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.353523 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.353571 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.353584 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.353606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.353619 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.366857 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.371953 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.371987 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.372000 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.372015 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.372027 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.387268 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.391665 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.391755 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.391778 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.391802 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.391859 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.399506 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/3.log" Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.411099 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\
\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"
sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":48599861
6},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\
\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.411290 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.412798 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.412817 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.412825 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.412840 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.412849 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.516033 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.516095 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.516107 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.516127 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.516184 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.665088 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.665135 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.665152 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.665248 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.665268 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.768272 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.768337 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.768358 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.768382 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.768399 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.871356 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.871387 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.871396 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.871410 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.871419 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.973338 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.973372 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.973389 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.973410 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.973422 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.076057 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.076092 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.076102 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.076119 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.076134 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.119261 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 14:33:20.452983085 +0000 UTC Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.170263 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:23 crc kubenswrapper[4656]: E0128 15:20:23.170552 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.178151 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.178225 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.178252 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.178300 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.178318 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.281600 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.281683 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.281713 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.281739 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.281757 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.385517 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.385581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.385601 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.385629 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.385648 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.488689 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.488729 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.488755 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.488775 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.488784 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.591608 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.591654 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.591666 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.591685 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.591696 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.699396 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.699452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.699466 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.699486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.699503 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.803240 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.803280 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.803291 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.803308 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.803320 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.905922 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.905969 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.905978 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.905994 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.906004 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.008959 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.009000 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.009008 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.009023 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.009033 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.112463 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.112510 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.112527 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.112549 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.112562 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.119706 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:57:40.509076986 +0000 UTC Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.170383 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.170409 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.170479 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:24 crc kubenswrapper[4656]: E0128 15:20:24.170551 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:24 crc kubenswrapper[4656]: E0128 15:20:24.170602 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:24 crc kubenswrapper[4656]: E0128 15:20:24.170650 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.214633 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.214668 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.214676 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.214691 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.214701 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.317008 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.317048 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.317059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.317076 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.317093 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.419951 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.420004 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.420020 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.420055 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.420072 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.522783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.522822 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.522833 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.522851 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.522862 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.625939 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.625982 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.626021 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.626050 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.626065 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.728530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.728580 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.728591 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.728614 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.728626 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.830986 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.831054 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.831069 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.831095 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.831109 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.933447 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.933485 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.933493 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.933509 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.933519 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.036796 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.036861 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.036883 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.036911 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.036928 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.120765 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 12:37:14.924377772 +0000 UTC Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.139795 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.139827 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.139839 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.139857 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.139870 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.169683 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:25 crc kubenswrapper[4656]: E0128 15:20:25.169808 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.243286 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.243331 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.243347 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.243369 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.243384 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.345785 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.345842 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.345854 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.345871 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.345884 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.448751 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.448779 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.448787 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.448801 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.448810 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.551959 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.552000 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.552009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.552026 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.552038 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.655354 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.655396 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.655408 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.655425 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.655438 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.759504 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.759573 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.759589 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.759616 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.759633 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.862799 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.862861 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.862882 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.862906 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.862924 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.966469 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.966540 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.966562 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.966594 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.966619 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.068981 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.069257 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.069274 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.069295 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.069307 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.121317 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:10:15.735468271 +0000 UTC Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.170019 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.170073 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.170098 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:26 crc kubenswrapper[4656]: E0128 15:20:26.170140 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:26 crc kubenswrapper[4656]: E0128 15:20:26.170205 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:26 crc kubenswrapper[4656]: E0128 15:20:26.170755 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.171774 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.171851 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.171868 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.171883 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.171894 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.275035 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.275104 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.275124 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.275149 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.275214 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.377898 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.377941 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.377952 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.378627 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.378664 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.482344 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.482390 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.482399 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.482414 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.482423 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.585282 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.585329 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.585342 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.585360 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.585372 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.688452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.688504 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.688520 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.688544 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.688564 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.790839 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.790893 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.790905 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.790923 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.790933 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.894917 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.894986 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.895003 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.895027 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.895059 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.057349 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.057425 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.057438 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.057458 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.057469 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.121648 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 20:59:03.340220982 +0000 UTC Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.160355 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.160434 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.160448 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.160467 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.160480 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.169761 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:27 crc kubenswrapper[4656]: E0128 15:20:27.169923 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.262729 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.262768 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.262777 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.262793 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.262806 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.365420 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.365469 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.365478 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.365494 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.365505 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.468542 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.468590 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.468599 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.468616 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.468626 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.572014 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.572059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.572073 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.572092 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.572104 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.674586 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.674717 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.674736 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.674754 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.674784 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.777024 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.777079 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.777089 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.777108 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.777119 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.879244 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.879285 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.879293 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.879326 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.879336 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.982266 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.982328 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.982343 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.982366 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.982380 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.085363 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.085401 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.085411 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.085431 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.085442 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.121865 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 11:13:20.134459222 +0000 UTC Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.170041 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.170122 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.170284 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:28 crc kubenswrapper[4656]: E0128 15:20:28.170270 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:28 crc kubenswrapper[4656]: E0128 15:20:28.170434 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:28 crc kubenswrapper[4656]: E0128 15:20:28.170529 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.185186 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.224714 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.224761 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.224772 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.224790 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.224804 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.326964 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.327007 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.327020 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.327039 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.327052 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.429059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.429126 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.429152 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.429237 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.429261 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.533325 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.533379 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.533398 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.533427 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.533445 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.636365 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.636400 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.636410 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.636424 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.636434 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.739113 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.739155 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.739184 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.739236 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.739248 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.841557 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.841595 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.841604 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.841622 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.841642 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.944705 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.944753 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.944764 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.944786 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.944798 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.047150 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.047205 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.047214 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.047228 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.047237 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.122829 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:24:50.833541604 +0000 UTC Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.150200 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.150269 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.150278 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.150297 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.150308 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.170369 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:29 crc kubenswrapper[4656]: E0128 15:20:29.170889 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.252697 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.252739 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.252754 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.252772 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.252783 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.355465 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.355522 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.355537 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.355584 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.355596 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.457531 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.457561 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.457571 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.457590 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.457602 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.560634 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.560662 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.560672 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.560687 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.560697 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.662822 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.662878 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.662888 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.662904 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.662918 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.765670 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.765707 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.765718 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.765735 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.765746 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.868917 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.868957 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.868969 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.868987 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.868999 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.980568 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.980625 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.980641 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.980661 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.980675 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.083765 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.083818 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.083829 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.083847 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.083859 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.123255 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:23:34.377513291 +0000 UTC Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.169592 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.169604 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.169781 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:30 crc kubenswrapper[4656]: E0128 15:20:30.170133 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:30 crc kubenswrapper[4656]: E0128 15:20:30.170255 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:30 crc kubenswrapper[4656]: E0128 15:20:30.170338 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.186215 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.186243 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.186252 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.186268 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.186277 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.289078 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.289120 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.289131 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.289152 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.289184 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.391612 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.391655 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.391665 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.391682 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.391697 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.493962 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.494023 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.494041 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.494065 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.494085 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.596392 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.596447 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.596459 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.596478 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.596491 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.699051 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.699095 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.699109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.699187 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.699203 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.802367 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.802407 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.802414 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.802433 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.802452 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.904995 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.905054 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.905066 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.905088 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.905100 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.007826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.007876 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.007887 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.007905 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.007917 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:31Z","lastTransitionTime":"2026-01-28T15:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:31 crc kubenswrapper[4656]: E0128 15:20:31.108777 4656 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.124099 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:51:34.234253786 +0000 UTC Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.169825 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:31 crc kubenswrapper[4656]: E0128 15:20:31.169968 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.195396 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.210546 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.224867 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.239357 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.258719 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a750cbb6ceaa1889263f277b489ae3b92336e27
c8e979f65558cbaf0084638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:05Z\\\",\\\"message\\\":\\\"2026-01-28T15:19:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9\\\\n2026-01-28T15:19:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9 to /host/opt/cni/bin/\\\\n2026-01-28T15:19:20Z [verbose] multus-daemon started\\\\n2026-01-28T15:19:20Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:20:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.285618 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 
15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:21Z\\\",\\\"message\\\":\\\"73 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:20:21.222738 6673 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"cluster\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\
\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc 
kubenswrapper[4656]: I0128 15:20:31.301366 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.316277 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5
a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.330459 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.345925 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.359261 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.372065 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.397668 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59b72fa-6f07-4658-b277-0b10b8bf83a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f7605b956da7648bb4ea64104ebddadf45a4297723b28d1813ec330122f9de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4f0a931b81775cf3bcadec1c2d278079e6d6c08334a5d412f957ea057000a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33daf9c576813489c0d122bf8b57511d33c442f5c4f81c8a1ba17b349d04d4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be91e89edee3d65aa4855a9e6e4354e182726e95ba57165fbebc4e1b334a57\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25512a305427a2fbc0dc915cc3dfb21cadd3db472a2764f1b5a686d60ec422e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa42b9d45549e6850eeb
f3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.409590 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.424945 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623
a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.439235 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.452204 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d4beba1-dc60-4190-925e-bd0c0d6deee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cadf761b9301aaeea19fad51cfac7b4aa80f49ae5e0fadf4eababc2c5bb945b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08932142792b5b7e1afc60e25e6fb6b092c9c65185a0e407f807d90b1928807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://826db300be47e8ade08ecd18880a53f4ce70b3b8f4ffbcd327fec2f952b0168d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5edae8f60ea42bc0f7cee0c415afdb634b13222f6a9b1bbac9e15d6b3ec3867\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.463725 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.473358 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.886596 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:31 crc kubenswrapper[4656]: E0128 15:20:31.886917 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:20:31 crc kubenswrapper[4656]: E0128 15:20:31.887030 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:35.887002634 +0000 UTC m=+186.395173448 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs") pod "network-metrics-daemon-bmj6r" (UID: "11320542-8463-40db-8981-632be2bd5a48") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.046003 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.125006 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:09:13.782569124 +0000 UTC Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.170598 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.170634 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.170792 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.170916 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.171285 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.171367 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.684155 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.684211 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.684221 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.684295 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.684309 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:32Z","lastTransitionTime":"2026-01-28T15:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.702355 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.706046 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.706080 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.706091 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.706107 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.706119 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:32Z","lastTransitionTime":"2026-01-28T15:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.722328 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.726682 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.726747 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.726759 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.726774 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.726836 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:32Z","lastTransitionTime":"2026-01-28T15:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.743361 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.748353 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.748416 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.748428 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.748448 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.748460 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:32Z","lastTransitionTime":"2026-01-28T15:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.766528 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.770922 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.770951 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.770960 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.770975 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.770985 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:32Z","lastTransitionTime":"2026-01-28T15:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.788410 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.788549 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.125527 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:05:17.224231566 +0000 UTC Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.170555 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:33 crc kubenswrapper[4656]: E0128 15:20:33.171250 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.171678 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:20:33 crc kubenswrapper[4656]: E0128 15:20:33.172014 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.186264 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.208102 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.222511 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.246848 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59b72fa-6f07-4658-b277-0b10b8bf83a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f7605b956da7648bb4ea64104ebddadf45a4297723b28d1813ec330122f9de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4f0a931b81775cf3bcadec1c2d278079e6d6c08334a5d412f957ea057000a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33daf9c576813489c0d122bf8b57511d33c442f5c4f81c8a1ba17b349d04d4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be91e89edee3d65aa4855a9e6e4354e182726e95ba57165fbebc4e1b334a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25512a305427a2fbc0dc915cc3dfb21cadd3db472a2764f1b5a686d60ec422e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.265721 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.277201 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc 
kubenswrapper[4656]: I0128 15:20:33.291448 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d4beba1-dc60-4190-925e-bd0c0d6deee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cadf761b9301aaeea19fad51cfac7b4aa80f49ae5e0fadf4eababc2c5bb945b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08932142792b5b7e1afc60e25e6fb6b092c9c65185a0e407f807d90b1928807c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://826db300be47e8ade08ecd18880a53f4ce70b3b8f4ffbcd327fec2f952b0168d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5edae8f60ea42bc0f7cee0c415afdb634b13222f6a9b1bbac9e15d6b3ec3867\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.308182 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.321902 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.338229 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a750cbb6ceaa1889263f277b489ae3b92336e27
c8e979f65558cbaf0084638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:05Z\\\",\\\"message\\\":\\\"2026-01-28T15:19:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9\\\\n2026-01-28T15:19:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9 to /host/opt/cni/bin/\\\\n2026-01-28T15:19:20Z [verbose] multus-daemon started\\\\n2026-01-28T15:19:20Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:20:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.360795 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:21Z\\\",\\\"message\\\":\\\"73 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: 
failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:20:21.222738 6673 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"cluster\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:20:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa53
0c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.388001 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346b
aef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.407307 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.421794 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.439607 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.452264 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.470193 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.489047 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.506849 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5
a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:34 crc kubenswrapper[4656]: I0128 15:20:34.125809 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:19:40.466056946 +0000 UTC Jan 28 15:20:34 crc kubenswrapper[4656]: I0128 15:20:34.170513 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:34 crc kubenswrapper[4656]: I0128 15:20:34.170640 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:34 crc kubenswrapper[4656]: I0128 15:20:34.170689 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:34 crc kubenswrapper[4656]: E0128 15:20:34.171786 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:34 crc kubenswrapper[4656]: E0128 15:20:34.171969 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:34 crc kubenswrapper[4656]: E0128 15:20:34.172195 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:35 crc kubenswrapper[4656]: I0128 15:20:35.126911 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:34:44.061147139 +0000 UTC Jan 28 15:20:35 crc kubenswrapper[4656]: I0128 15:20:35.169676 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:35 crc kubenswrapper[4656]: E0128 15:20:35.170061 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:36 crc kubenswrapper[4656]: I0128 15:20:36.127799 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 13:48:34.638280197 +0000 UTC Jan 28 15:20:36 crc kubenswrapper[4656]: I0128 15:20:36.173236 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:36 crc kubenswrapper[4656]: E0128 15:20:36.173558 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:36 crc kubenswrapper[4656]: I0128 15:20:36.174250 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:36 crc kubenswrapper[4656]: E0128 15:20:36.174316 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:36 crc kubenswrapper[4656]: I0128 15:20:36.174451 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:36 crc kubenswrapper[4656]: E0128 15:20:36.174514 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:37 crc kubenswrapper[4656]: E0128 15:20:37.047362 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:20:37 crc kubenswrapper[4656]: I0128 15:20:37.128124 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:11:58.117489854 +0000 UTC Jan 28 15:20:37 crc kubenswrapper[4656]: I0128 15:20:37.170659 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:37 crc kubenswrapper[4656]: E0128 15:20:37.170968 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:38 crc kubenswrapper[4656]: I0128 15:20:38.129089 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 06:02:43.085227007 +0000 UTC Jan 28 15:20:38 crc kubenswrapper[4656]: I0128 15:20:38.170450 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:38 crc kubenswrapper[4656]: I0128 15:20:38.170516 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:38 crc kubenswrapper[4656]: I0128 15:20:38.170453 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:38 crc kubenswrapper[4656]: E0128 15:20:38.170668 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:38 crc kubenswrapper[4656]: E0128 15:20:38.170812 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:38 crc kubenswrapper[4656]: E0128 15:20:38.170964 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:39 crc kubenswrapper[4656]: I0128 15:20:39.129501 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:48:11.645608866 +0000 UTC Jan 28 15:20:39 crc kubenswrapper[4656]: I0128 15:20:39.170811 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:39 crc kubenswrapper[4656]: E0128 15:20:39.171385 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:40 crc kubenswrapper[4656]: I0128 15:20:40.130431 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:30:17.616396151 +0000 UTC Jan 28 15:20:40 crc kubenswrapper[4656]: I0128 15:20:40.170550 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:40 crc kubenswrapper[4656]: E0128 15:20:40.170680 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:40 crc kubenswrapper[4656]: I0128 15:20:40.170569 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:40 crc kubenswrapper[4656]: E0128 15:20:40.170775 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:40 crc kubenswrapper[4656]: I0128 15:20:40.170550 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:40 crc kubenswrapper[4656]: E0128 15:20:40.170951 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.131195 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:16:06.657941844 +0000 UTC Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.169865 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:41 crc kubenswrapper[4656]: E0128 15:20:41.169989 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.212967 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=13.212945189 podStartE2EDuration="13.212945189s" podCreationTimestamp="2026-01-28 15:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.212646561 +0000 UTC m=+131.720817365" watchObservedRunningTime="2026-01-28 15:20:41.212945189 +0000 UTC m=+131.721115993" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.268494 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.269597 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:20:41 crc kubenswrapper[4656]: E0128 15:20:41.269790 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.320092 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-c695w" podStartSLOduration=96.32007015 podStartE2EDuration="1m36.32007015s" podCreationTimestamp="2026-01-28 15:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.318912387 +0000 UTC m=+131.827083181" 
watchObservedRunningTime="2026-01-28 15:20:41.32007015 +0000 UTC m=+131.828240974" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.341787 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rpzjg" podStartSLOduration=95.341766202 podStartE2EDuration="1m35.341766202s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.341518255 +0000 UTC m=+131.849689059" watchObservedRunningTime="2026-01-28 15:20:41.341766202 +0000 UTC m=+131.849937016" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.482193 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" podStartSLOduration=94.482138741 podStartE2EDuration="1m34.482138741s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.458145642 +0000 UTC m=+131.966316446" watchObservedRunningTime="2026-01-28 15:20:41.482138741 +0000 UTC m=+131.990309545" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.482487 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.482481771 podStartE2EDuration="1m21.482481771s" podCreationTimestamp="2026-01-28 15:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.480731151 +0000 UTC m=+131.988901965" watchObservedRunningTime="2026-01-28 15:20:41.482481771 +0000 UTC m=+131.990652575" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.500104 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=73.500082267 podStartE2EDuration="1m13.500082267s" podCreationTimestamp="2026-01-28 15:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.499124299 +0000 UTC m=+132.007295133" watchObservedRunningTime="2026-01-28 15:20:41.500082267 +0000 UTC m=+132.008253071" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.541917 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-55xm4" podStartSLOduration=96.541892319 podStartE2EDuration="1m36.541892319s" podCreationTimestamp="2026-01-28 15:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.527821594 +0000 UTC m=+132.035992398" watchObservedRunningTime="2026-01-28 15:20:41.541892319 +0000 UTC m=+132.050063123" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.542423 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podStartSLOduration=95.542416554 podStartE2EDuration="1m35.542416554s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.541334793 +0000 UTC m=+132.049505607" watchObservedRunningTime="2026-01-28 15:20:41.542416554 +0000 UTC m=+132.050587368" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.587774 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=30.587748867 podStartE2EDuration="30.587748867s" podCreationTimestamp="2026-01-28 15:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-28 15:20:41.586304925 +0000 UTC m=+132.094475729" watchObservedRunningTime="2026-01-28 15:20:41.587748867 +0000 UTC m=+132.095919671" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.628994 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=50.628970511 podStartE2EDuration="50.628970511s" podCreationTimestamp="2026-01-28 15:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.605714703 +0000 UTC m=+132.113885507" watchObservedRunningTime="2026-01-28 15:20:41.628970511 +0000 UTC m=+132.137141315" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.645932 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-854tp" podStartSLOduration=95.645905048 podStartE2EDuration="1m35.645905048s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.629429715 +0000 UTC m=+132.137600519" watchObservedRunningTime="2026-01-28 15:20:41.645905048 +0000 UTC m=+132.154075852" Jan 28 15:20:42 crc kubenswrapper[4656]: E0128 15:20:42.058675 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 28 15:20:42 crc kubenswrapper[4656]: I0128 15:20:42.132041 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:19:51.925733387 +0000 UTC Jan 28 15:20:42 crc kubenswrapper[4656]: I0128 15:20:42.170756 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:42 crc kubenswrapper[4656]: I0128 15:20:42.170900 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:42 crc kubenswrapper[4656]: I0128 15:20:42.170902 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:42 crc kubenswrapper[4656]: E0128 15:20:42.171040 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:42 crc kubenswrapper[4656]: E0128 15:20:42.171119 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:42 crc kubenswrapper[4656]: E0128 15:20:42.171267 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.010311 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.011801 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.011914 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.012008 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.012190 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:43Z","lastTransitionTime":"2026-01-28T15:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.080965 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz"] Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.081955 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.088981 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.089186 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.088982 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.088988 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.131670 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.131977 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: 
\"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.132100 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.132209 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.132315 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.132355 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:56:34.764066791 +0000 UTC Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.132407 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.142643 4656 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from 
k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.170306 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:43 crc kubenswrapper[4656]: E0128 15:20:43.170746 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.233756 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.234090 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.234318 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc 
kubenswrapper[4656]: I0128 15:20:43.233997 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.234442 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.234493 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.235041 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.235235 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.246371 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.266245 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.401822 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.501859 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" event={"ID":"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3","Type":"ContainerStarted","Data":"505c9039575e525ff6948f41b5e315fdcf04c83ae9620408f603397bf19d9ef6"} Jan 28 15:20:44 crc kubenswrapper[4656]: I0128 15:20:44.170384 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:44 crc kubenswrapper[4656]: I0128 15:20:44.170423 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:44 crc kubenswrapper[4656]: I0128 15:20:44.170513 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:44 crc kubenswrapper[4656]: E0128 15:20:44.171424 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:44 crc kubenswrapper[4656]: E0128 15:20:44.171508 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:44 crc kubenswrapper[4656]: E0128 15:20:44.171591 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:44 crc kubenswrapper[4656]: I0128 15:20:44.511877 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" event={"ID":"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3","Type":"ContainerStarted","Data":"318c5b9d2f9f708ec416642b521a622934f3ca6935742ea695ff42fcd5a62316"} Jan 28 15:20:45 crc kubenswrapper[4656]: I0128 15:20:45.169808 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:45 crc kubenswrapper[4656]: E0128 15:20:45.169946 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:46 crc kubenswrapper[4656]: I0128 15:20:46.169879 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:46 crc kubenswrapper[4656]: I0128 15:20:46.169981 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:46 crc kubenswrapper[4656]: I0128 15:20:46.170105 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:46 crc kubenswrapper[4656]: E0128 15:20:46.170367 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:46 crc kubenswrapper[4656]: E0128 15:20:46.170444 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:46 crc kubenswrapper[4656]: E0128 15:20:46.170279 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:47 crc kubenswrapper[4656]: E0128 15:20:47.059917 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:20:47 crc kubenswrapper[4656]: I0128 15:20:47.170619 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:47 crc kubenswrapper[4656]: E0128 15:20:47.170825 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:48 crc kubenswrapper[4656]: I0128 15:20:48.170309 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:48 crc kubenswrapper[4656]: E0128 15:20:48.170707 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:48 crc kubenswrapper[4656]: I0128 15:20:48.170394 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:48 crc kubenswrapper[4656]: I0128 15:20:48.170327 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:48 crc kubenswrapper[4656]: E0128 15:20:48.171287 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:48 crc kubenswrapper[4656]: E0128 15:20:48.171440 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:49 crc kubenswrapper[4656]: I0128 15:20:49.169902 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:49 crc kubenswrapper[4656]: E0128 15:20:49.170456 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:50 crc kubenswrapper[4656]: I0128 15:20:50.169593 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:50 crc kubenswrapper[4656]: I0128 15:20:50.169655 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:50 crc kubenswrapper[4656]: E0128 15:20:50.169767 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:50 crc kubenswrapper[4656]: E0128 15:20:50.169849 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:50 crc kubenswrapper[4656]: I0128 15:20:50.169621 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:50 crc kubenswrapper[4656]: E0128 15:20:50.170341 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:51 crc kubenswrapper[4656]: I0128 15:20:51.171400 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:51 crc kubenswrapper[4656]: E0128 15:20:51.173350 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:52 crc kubenswrapper[4656]: E0128 15:20:52.061591 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.169818 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.169861 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.169929 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:52 crc kubenswrapper[4656]: E0128 15:20:52.170030 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:52 crc kubenswrapper[4656]: E0128 15:20:52.170370 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:52 crc kubenswrapper[4656]: E0128 15:20:52.170497 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.539813 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/1.log" Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.540726 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/0.log" Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.540853 4656 generic.go:334] "Generic (PLEG): container finished" podID="7662a84d-d9cb-4684-b76f-c63ffeff8344" containerID="c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638" exitCode=1 Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.540927 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerDied","Data":"c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638"} Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.540999 4656 scope.go:117] "RemoveContainer" containerID="469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434" Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.541484 4656 scope.go:117] "RemoveContainer" containerID="c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638" Jan 28 15:20:52 crc kubenswrapper[4656]: E0128 15:20:52.541718 
4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-rpzjg_openshift-multus(7662a84d-d9cb-4684-b76f-c63ffeff8344)\"" pod="openshift-multus/multus-rpzjg" podUID="7662a84d-d9cb-4684-b76f-c63ffeff8344" Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.563373 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" podStartSLOduration=106.563351109 podStartE2EDuration="1m46.563351109s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:44.531790789 +0000 UTC m=+135.039961593" watchObservedRunningTime="2026-01-28 15:20:52.563351109 +0000 UTC m=+143.071521913" Jan 28 15:20:53 crc kubenswrapper[4656]: I0128 15:20:53.170458 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:53 crc kubenswrapper[4656]: E0128 15:20:53.170892 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:53 crc kubenswrapper[4656]: I0128 15:20:53.171212 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:20:53 crc kubenswrapper[4656]: E0128 15:20:53.171390 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" Jan 28 15:20:53 crc kubenswrapper[4656]: I0128 15:20:53.545799 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/1.log" Jan 28 15:20:54 crc kubenswrapper[4656]: I0128 15:20:54.170374 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:54 crc kubenswrapper[4656]: I0128 15:20:54.170406 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:54 crc kubenswrapper[4656]: E0128 15:20:54.171070 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:20:54 crc kubenswrapper[4656]: E0128 15:20:54.171249 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:20:54 crc kubenswrapper[4656]: I0128 15:20:54.170406 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:20:54 crc kubenswrapper[4656]: E0128 15:20:54.171570 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:20:55 crc kubenswrapper[4656]: I0128 15:20:55.170067 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:20:55 crc kubenswrapper[4656]: E0128 15:20:55.170896 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:20:56 crc kubenswrapper[4656]: I0128 15:20:56.169851 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:20:56 crc kubenswrapper[4656]: I0128 15:20:56.169927 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:20:56 crc kubenswrapper[4656]: I0128 15:20:56.169891 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:20:56 crc kubenswrapper[4656]: E0128 15:20:56.170023 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:20:56 crc kubenswrapper[4656]: E0128 15:20:56.170116 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:20:56 crc kubenswrapper[4656]: E0128 15:20:56.170201 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:20:57 crc kubenswrapper[4656]: E0128 15:20:57.063517 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 28 15:20:57 crc kubenswrapper[4656]: I0128 15:20:57.170629 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:20:57 crc kubenswrapper[4656]: E0128 15:20:57.170824 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:20:58 crc kubenswrapper[4656]: I0128 15:20:58.170470 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:20:58 crc kubenswrapper[4656]: I0128 15:20:58.170485 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:20:58 crc kubenswrapper[4656]: I0128 15:20:58.170485 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:20:58 crc kubenswrapper[4656]: E0128 15:20:58.170786 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:20:58 crc kubenswrapper[4656]: E0128 15:20:58.170638 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:20:58 crc kubenswrapper[4656]: E0128 15:20:58.170883 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:20:59 crc kubenswrapper[4656]: I0128 15:20:59.170622 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:20:59 crc kubenswrapper[4656]: E0128 15:20:59.170945 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:21:00 crc kubenswrapper[4656]: I0128 15:21:00.169960 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:21:00 crc kubenswrapper[4656]: I0128 15:21:00.169986 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:21:00 crc kubenswrapper[4656]: I0128 15:21:00.170017 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:21:00 crc kubenswrapper[4656]: E0128 15:21:00.170107 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:21:00 crc kubenswrapper[4656]: E0128 15:21:00.170214 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:21:00 crc kubenswrapper[4656]: E0128 15:21:00.170340 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:21:01 crc kubenswrapper[4656]: I0128 15:21:01.191706 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:21:01 crc kubenswrapper[4656]: I0128 15:21:01.191770 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:21:01 crc kubenswrapper[4656]: E0128 15:21:01.196505 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:21:01 crc kubenswrapper[4656]: E0128 15:21:01.196670 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:21:02 crc kubenswrapper[4656]: E0128 15:21:02.065951 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 28 15:21:02 crc kubenswrapper[4656]: I0128 15:21:02.170221 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:21:02 crc kubenswrapper[4656]: I0128 15:21:02.170316 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:21:02 crc kubenswrapper[4656]: E0128 15:21:02.170395 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:21:02 crc kubenswrapper[4656]: E0128 15:21:02.170491 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:21:03 crc kubenswrapper[4656]: I0128 15:21:03.169833 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:21:03 crc kubenswrapper[4656]: E0128 15:21:03.170016 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:21:03 crc kubenswrapper[4656]: I0128 15:21:03.170366 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:21:03 crc kubenswrapper[4656]: E0128 15:21:03.170490 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:21:04 crc kubenswrapper[4656]: I0128 15:21:04.170505 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:21:04 crc kubenswrapper[4656]: E0128 15:21:04.170650 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:21:04 crc kubenswrapper[4656]: I0128 15:21:04.170722 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:21:04 crc kubenswrapper[4656]: E0128 15:21:04.170910 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:21:05 crc kubenswrapper[4656]: I0128 15:21:05.170506 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:21:05 crc kubenswrapper[4656]: E0128 15:21:05.170864 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:21:05 crc kubenswrapper[4656]: I0128 15:21:05.171577 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:21:05 crc kubenswrapper[4656]: E0128 15:21:05.171768 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:21:06 crc kubenswrapper[4656]: I0128 15:21:06.169619 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:21:06 crc kubenswrapper[4656]: I0128 15:21:06.169670 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:21:06 crc kubenswrapper[4656]: E0128 15:21:06.169786 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:21:06 crc kubenswrapper[4656]: E0128 15:21:06.169866 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:21:07 crc kubenswrapper[4656]: E0128 15:21:07.067672 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 28 15:21:07 crc kubenswrapper[4656]: I0128 15:21:07.169724 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:21:07 crc kubenswrapper[4656]: I0128 15:21:07.169834 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:21:07 crc kubenswrapper[4656]: E0128 15:21:07.169892 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:21:07 crc kubenswrapper[4656]: E0128 15:21:07.170086 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:21:07 crc kubenswrapper[4656]: I0128 15:21:07.170644 4656 scope.go:117] "RemoveContainer" containerID="c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638"
Jan 28 15:21:07 crc kubenswrapper[4656]: I0128 15:21:07.596072 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/1.log"
Jan 28 15:21:07 crc kubenswrapper[4656]: I0128 15:21:07.596486 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerStarted","Data":"34fa797442b557de0e9ffab2d826f22ba8d92221e464edd57e5778604260c2bd"}
Jan 28 15:21:08 crc kubenswrapper[4656]: I0128 15:21:08.170406 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:21:08 crc kubenswrapper[4656]: I0128 15:21:08.170424 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:21:08 crc kubenswrapper[4656]: E0128 15:21:08.170879 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:21:08 crc kubenswrapper[4656]: E0128 15:21:08.171031 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:21:08 crc kubenswrapper[4656]: I0128 15:21:08.171195 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"
Jan 28 15:21:08 crc kubenswrapper[4656]: I0128 15:21:08.602128 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/3.log"
Jan 28 15:21:08 crc kubenswrapper[4656]: I0128 15:21:08.604448 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"}
Jan 28 15:21:08 crc kubenswrapper[4656]: I0128 15:21:08.605594 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt"
Jan 28 15:21:08 crc kubenswrapper[4656]: I0128 15:21:08.671940 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podStartSLOduration=122.671915666 podStartE2EDuration="2m2.671915666s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:08.667317974 +0000 UTC m=+159.175488778" watchObservedRunningTime="2026-01-28 15:21:08.671915666 +0000 UTC m=+159.180086480"
Jan 28 15:21:09 crc kubenswrapper[4656]: I0128 15:21:09.048819 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bmj6r"]
Jan 28 15:21:09 crc kubenswrapper[4656]: I0128 15:21:09.049063 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:21:09 crc kubenswrapper[4656]: E0128 15:21:09.049277 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:21:09 crc kubenswrapper[4656]: I0128 15:21:09.173562 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:21:09 crc kubenswrapper[4656]: E0128 15:21:09.173694 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:21:10 crc kubenswrapper[4656]: I0128 15:21:10.169594 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:21:10 crc kubenswrapper[4656]: E0128 15:21:10.170032 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:21:10 crc kubenswrapper[4656]: I0128 15:21:10.170421 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:21:10 crc kubenswrapper[4656]: E0128 15:21:10.170552 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:21:11 crc kubenswrapper[4656]: I0128 15:21:11.170424 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:21:11 crc kubenswrapper[4656]: I0128 15:21:11.170455 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:21:11 crc kubenswrapper[4656]: E0128 15:21:11.172848 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:21:11 crc kubenswrapper[4656]: E0128 15:21:11.173086 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:21:12 crc kubenswrapper[4656]: I0128 15:21:12.169968 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:21:12 crc kubenswrapper[4656]: I0128 15:21:12.170406 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:21:12 crc kubenswrapper[4656]: I0128 15:21:12.172803 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 28 15:21:12 crc kubenswrapper[4656]: I0128 15:21:12.173117 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 28 15:21:12 crc kubenswrapper[4656]: I0128 15:21:12.173399 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 28 15:21:12 crc kubenswrapper[4656]: I0128 15:21:12.173633 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.170432 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.170845 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.173090 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.173264 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.444345 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.499781 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.500521 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.502585 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.502986 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.503868 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.504220 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.504729 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j8tlz"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.505487 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.505513 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jpgn"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.505777 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.506138 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.506346 4656 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.506411 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.506603 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.506911 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gcdpp"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.507471 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.508372 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.508717 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.509221 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.509621 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.510354 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.510795 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.517676 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l99lt"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.518124 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-zrrnn"]
Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.520734 4656 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data": failed to list *v1.Secret: secrets "v4-0-config-user-idp-0-file-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.520781 4656 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: secrets "v4-0-config-user-template-login" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.520809 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-idp-0-file-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.520819 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-login\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.520959 4656 reflector.go:561] object-"openshift-authentication"/"audit": failed to list *v1.ConfigMap: configmaps "audit" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.520982 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"audit\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.521019 4656 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-serving-cert": failed to list *v1.Secret: secrets "v4-0-config-system-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.521040 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.521191 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.527339 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8l46h"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.527591 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zrrnn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.527742 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hvbtc"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.528143 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rmnzt"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.528202 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.528423 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.528618 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rmnzt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.535143 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.535885 4656 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537589 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f905c9bf-de63-4bbb-842b-c5cfb76fec46-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537621 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537643 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537658 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74b5802b-b8fb-48d1-8723-2c78386825db-audit-dir\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537675 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-audit-policies\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537715 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mzpq\" (UniqueName: \"kubernetes.io/projected/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-kube-api-access-5mzpq\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537742 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-etcd-client\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537878 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538215 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-encryption-config\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 
15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538424 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f905c9bf-de63-4bbb-842b-c5cfb76fec46-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538452 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-serving-cert\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538474 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb55d\" (UniqueName: \"kubernetes.io/projected/f905c9bf-de63-4bbb-842b-c5cfb76fec46-kube-api-access-sb55d\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538489 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt568\" (UniqueName: \"kubernetes.io/projected/f0d9e967-a840-414e-9ab7-00affd50fec5-kube-api-access-mt568\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538508 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538560 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-audit-dir\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538579 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538626 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0d9e967-a840-414e-9ab7-00affd50fec5-serving-cert\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538656 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0d9e967-a840-414e-9ab7-00affd50fec5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538686 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7gtm\" (UniqueName: \"kubernetes.io/projected/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-kube-api-access-w7gtm\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538707 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538724 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538738 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wzmf\" (UniqueName: \"kubernetes.io/projected/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-kube-api-access-6wzmf\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538752 4656 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538774 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f905c9bf-de63-4bbb-842b-c5cfb76fec46-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.552216 4656 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.552271 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.552344 4656 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource 
"secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.552361 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.552413 4656 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.552431 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.552488 4656 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc 
kubenswrapper[4656]: E0128 15:21:13.552504 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.552554 4656 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.552570 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.562435 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.564230 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-jrkdc"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.564930 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672121 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njdjz\" (UniqueName: \"kubernetes.io/projected/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-kube-api-access-njdjz\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672174 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-encryption-config\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672222 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672251 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f905c9bf-de63-4bbb-842b-c5cfb76fec46-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672303 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57598ddc-f214-47b1-bdef-10bdf94607d1-audit-dir\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672321 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672336 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672355 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-config\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672377 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-serving-cert\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc 
kubenswrapper[4656]: I0128 15:21:13.672394 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt568\" (UniqueName: \"kubernetes.io/projected/f0d9e967-a840-414e-9ab7-00affd50fec5-kube-api-access-mt568\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672422 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672438 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-images\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672458 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672476 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-service-ca-bundle\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672497 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb55d\" (UniqueName: \"kubernetes.io/projected/f905c9bf-de63-4bbb-842b-c5cfb76fec46-kube-api-access-sb55d\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672525 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-audit-dir\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672594 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672621 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0d9e967-a840-414e-9ab7-00affd50fec5-serving-cert\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc 
kubenswrapper[4656]: I0128 15:21:13.672636 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-image-import-ca\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672655 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672675 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672693 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2fd1877-7bc3-4808-8a45-716da7b829e5-serving-cert\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672712 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4-metrics-tls\") pod 
\"dns-operator-744455d44c-hvbtc\" (UID: \"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672732 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-encryption-config\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672750 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjmtn\" (UniqueName: \"kubernetes.io/projected/57598ddc-f214-47b1-bdef-10bdf94607d1-kube-api-access-mjmtn\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672770 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0d9e967-a840-414e-9ab7-00affd50fec5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672795 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672818 4656 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqj9\" (UniqueName: \"kubernetes.io/projected/e2fd1877-7bc3-4808-8a45-716da7b829e5-kube-api-access-xkqj9\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672834 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672852 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4d96cdec-34f1-44e2-9380-40475a720b31-machine-approver-tls\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672873 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7gtm\" (UniqueName: \"kubernetes.io/projected/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-kube-api-access-w7gtm\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672893 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672913 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlqkc\" (UniqueName: \"kubernetes.io/projected/6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4-kube-api-access-xlqkc\") pod \"dns-operator-744455d44c-hvbtc\" (UID: \"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672929 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-config\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672947 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672976 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673004 4656 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-serving-cert\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673027 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wzmf\" (UniqueName: \"kubernetes.io/projected/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-kube-api-access-6wzmf\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673044 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673064 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-config\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673109 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673134 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s67ff\" (UniqueName: \"kubernetes.io/projected/97f85e75-6682-490f-9f1d-cdf924a67f38-kube-api-access-s67ff\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673162 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-config\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673182 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f85e75-6682-490f-9f1d-cdf924a67f38-serving-cert\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673228 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f905c9bf-de63-4bbb-842b-c5cfb76fec46-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673257 4656 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cp7b\" (UniqueName: \"kubernetes.io/projected/d903ef3d-1544-4343-b254-15939a05fec0-kube-api-access-2cp7b\") pod \"downloads-7954f5f757-zrrnn\" (UID: \"d903ef3d-1544-4343-b254-15939a05fec0\") " pod="openshift-console/downloads-7954f5f757-zrrnn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673279 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7qd6\" (UniqueName: \"kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673300 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlrc9\" (UniqueName: \"kubernetes.io/projected/679bac75-fbd8-4a24-ad40-9c5d10860c90-kube-api-access-jlrc9\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673322 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673373 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krlkb\" (UniqueName: 
\"kubernetes.io/projected/4d96cdec-34f1-44e2-9380-40475a720b31-kube-api-access-krlkb\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673397 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f905c9bf-de63-4bbb-842b-c5cfb76fec46-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673432 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673453 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673479 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-config\") pod \"apiserver-76f77b778f-j8tlz\" (UID: 
\"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673497 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-etcd-serving-ca\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673518 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-audit\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673545 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-etcd-client\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673564 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673585 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/679bac75-fbd8-4a24-ad40-9c5d10860c90-serving-cert\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673604 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2fd1877-7bc3-4808-8a45-716da7b829e5-trusted-ca\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673637 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673664 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673735 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74b5802b-b8fb-48d1-8723-2c78386825db-audit-dir\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673758 4656 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-client-ca\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673783 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-audit-policies\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673808 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-auth-proxy-config\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673832 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/57598ddc-f214-47b1-bdef-10bdf94607d1-node-pullsecrets\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673856 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mzpq\" (UniqueName: \"kubernetes.io/projected/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-kube-api-access-5mzpq\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673879 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-etcd-client\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673931 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673958 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673993 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.674014 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmz5l\" 
(UniqueName: \"kubernetes.io/projected/74b5802b-b8fb-48d1-8723-2c78386825db-kube-api-access-lmz5l\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.674043 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fd1877-7bc3-4808-8a45-716da7b829e5-config\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.676251 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-audit-dir\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.676821 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74b5802b-b8fb-48d1-8723-2c78386825db-audit-dir\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.677120 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0d9e967-a840-414e-9ab7-00affd50fec5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.686867 4656 reflector.go:561] 
object-"openshift-authentication"/"v4-0-config-user-template-error": failed to list *v1.Secret: secrets "v4-0-config-user-template-error" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.686932 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-error\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.687214 4656 reflector.go:561] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": failed to list *v1.Secret: secrets "oauth-openshift-dockercfg-znhcc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.687235 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-znhcc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"oauth-openshift-dockercfg-znhcc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.689579 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.691524 4656 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.695562 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.695961 4656 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.695998 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.696094 4656 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: secrets "samples-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.696112 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"samples-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" 
in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.696533 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c8r6q"] Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.696955 4656 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": failed to list *v1.Secret: secrets "v4-0-config-user-template-provider-selection" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.696979 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-provider-selection\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.697269 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.699329 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.699535 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.699920 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.700477 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.700926 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.701153 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.701347 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.701490 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.701653 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.711120 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.726414 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.726909 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.727837 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.727963 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.728082 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.728278 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.728409 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.727915 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.728306 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.728568 4656 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.729104 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.729474 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.729612 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.729819 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.729932 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.730067 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.730172 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.730483 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.730599 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.730687 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 15:21:13 crc 
kubenswrapper[4656]: I0128 15:21:13.730799 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.730967 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731145 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731458 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731560 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731632 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731696 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731761 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731833 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731909 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.732011 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.732092 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.732263 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.732360 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.732468 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.732467 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.732991 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.733146 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.733296 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.733443 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.733556 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.733740 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.733849 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.733846 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.734177 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.734259 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.734404 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.734808 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.734859 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.735382 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.735548 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.735885 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.735976 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.736228 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.736323 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-audit-policies\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.735897 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.737273 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.737766 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.737958 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.738102 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.739492 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.740891 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.740939 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.741907 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.743249 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.744704 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.745117 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.747438 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.747674 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.747994 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0d9e967-a840-414e-9ab7-00affd50fec5-serving-cert\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.748256 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5r48x"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.748284 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.748665 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.748806 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.748820 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.748954 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.748973 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.749287 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.753277 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.755727 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.758380 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.766725 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f905c9bf-de63-4bbb-842b-c5cfb76fec46-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.768300 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-etcd-client\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.768586 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-encryption-config\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.768963 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.769891 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.771507 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-serving-cert\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.772213 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.772227 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f905c9bf-de63-4bbb-842b-c5cfb76fec46-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.774337 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.774869 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f905c9bf-de63-4bbb-842b-c5cfb76fec46-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.775145 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-image-import-ca\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797511 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-oauth-config\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797567 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797592 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797614 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2fd1877-7bc3-4808-8a45-716da7b829e5-serving-cert\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797630 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4-metrics-tls\") pod \"dns-operator-744455d44c-hvbtc\" (UID: \"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797754 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-encryption-config\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797777 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjmtn\" (UniqueName: \"kubernetes.io/projected/57598ddc-f214-47b1-bdef-10bdf94607d1-kube-api-access-mjmtn\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797803 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797822 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkqj9\" (UniqueName: \"kubernetes.io/projected/e2fd1877-7bc3-4808-8a45-716da7b829e5-kube-api-access-xkqj9\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797850 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797874 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4d96cdec-34f1-44e2-9380-40475a720b31-machine-approver-tls\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798020 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798044 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlqkc\" (UniqueName: \"kubernetes.io/projected/6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4-kube-api-access-xlqkc\") pod \"dns-operator-744455d44c-hvbtc\" (UID: \"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798070 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-config\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798115 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-serving-cert\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798143 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-config\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798186 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798223 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s67ff\" (UniqueName: \"kubernetes.io/projected/97f85e75-6682-490f-9f1d-cdf924a67f38-kube-api-access-s67ff\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798246 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-config\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798264 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f85e75-6682-490f-9f1d-cdf924a67f38-serving-cert\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798293 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cp7b\" (UniqueName: \"kubernetes.io/projected/d903ef3d-1544-4343-b254-15939a05fec0-kube-api-access-2cp7b\") pod \"downloads-7954f5f757-zrrnn\" (UID: \"d903ef3d-1544-4343-b254-15939a05fec0\") " pod="openshift-console/downloads-7954f5f757-zrrnn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798313 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7qd6\" (UniqueName: \"kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798332 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-oauth-serving-cert\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798353 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlrc9\" (UniqueName: \"kubernetes.io/projected/679bac75-fbd8-4a24-ad40-9c5d10860c90-kube-api-access-jlrc9\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798383 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798406 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krlkb\" (UniqueName: \"kubernetes.io/projected/4d96cdec-34f1-44e2-9380-40475a720b31-kube-api-access-krlkb\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798425 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798451 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798475 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-config\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798498 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-etcd-serving-ca\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798543 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-audit\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798560 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-etcd-client\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798596 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798617 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/679bac75-fbd8-4a24-ad40-9c5d10860c90-serving-cert\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798634 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2fd1877-7bc3-4808-8a45-716da7b829e5-trusted-ca\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798654 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-client-ca\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798674 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-serving-cert\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798703 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-auth-proxy-config\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798724 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/57598ddc-f214-47b1-bdef-10bdf94607d1-node-pullsecrets\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798750 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrjsb\" (UniqueName: \"kubernetes.io/projected/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-kube-api-access-jrjsb\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798792 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-config\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798918 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799006 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799032 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmz5l\" (UniqueName: \"kubernetes.io/projected/74b5802b-b8fb-48d1-8723-2c78386825db-kube-api-access-lmz5l\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799055 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fd1877-7bc3-4808-8a45-716da7b829e5-config\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799076 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njdjz\" (UniqueName: \"kubernetes.io/projected/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-kube-api-access-njdjz\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799131 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-trusted-ca-bundle\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799188 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799213 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-service-ca\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799247 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57598ddc-f214-47b1-bdef-10bdf94607d1-audit-dir\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799273 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799296 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799321 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-config\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799363 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-images\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp"
Jan 28
15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799389 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799419 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-service-ca-bundle\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.776356 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-image-import-ca\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.776615 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.794269 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt568\" (UniqueName: \"kubernetes.io/projected/f0d9e967-a840-414e-9ab7-00affd50fec5-kube-api-access-mt568\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.796343 4656 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.794810 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.795049 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.801679 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fd1877-7bc3-4808-8a45-716da7b829e5-config\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.806030 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-service-ca-bundle\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.806787 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-config\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.807563 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mzpq\" (UniqueName: \"kubernetes.io/projected/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-kube-api-access-5mzpq\") pod \"apiserver-7bbb656c7d-98brw\" (UID: 
\"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.808686 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.809487 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-config\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.835200 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4d96cdec-34f1-44e2-9380-40475a720b31-machine-approver-tls\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.835602 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.837328 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.837528 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.837817 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.838511 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-encryption-config\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.838695 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb55d\" (UniqueName: \"kubernetes.io/projected/f905c9bf-de63-4bbb-842b-c5cfb76fec46-kube-api-access-sb55d\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.838812 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.841668 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.843314 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.843859 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-etcd-serving-ca\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.844042 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-config\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.848699 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.849264 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.850402 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-client-ca\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.851150 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.851747 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-config\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.852185 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.852286 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57598ddc-f214-47b1-bdef-10bdf94607d1-audit-dir\") pod 
\"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.853085 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-images\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.853555 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-config\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.854081 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-audit\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.854462 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f85e75-6682-490f-9f1d-cdf924a67f38-serving-cert\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.856807 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/57598ddc-f214-47b1-bdef-10bdf94607d1-node-pullsecrets\") pod \"apiserver-76f77b778f-j8tlz\" (UID: 
\"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.857674 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4-metrics-tls\") pod \"dns-operator-744455d44c-hvbtc\" (UID: \"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.857968 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2fd1877-7bc3-4808-8a45-716da7b829e5-trusted-ca\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.859181 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2fd1877-7bc3-4808-8a45-716da7b829e5-serving-cert\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.859777 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-etcd-client\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.860122 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qh5kz"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.860879 4656 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.861578 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.861815 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.861820 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.865936 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/679bac75-fbd8-4a24-ad40-9c5d10860c90-serving-cert\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.866024 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.866744 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.868466 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.868680 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.874138 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.874873 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.875119 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tvwnv"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.875843 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.876109 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.876390 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.877678 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-serving-cert\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.881293 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.891409 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.892544 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.893004 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.893745 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.896603 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.898949 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wzmf\" (UniqueName: \"kubernetes.io/projected/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-kube-api-access-6wzmf\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.900032 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901009 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c96c02d3-be15-47e5-a4bd-e65644751b10-trusted-ca\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901047 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzc7g\" (UniqueName: \"kubernetes.io/projected/c96c02d3-be15-47e5-a4bd-e65644751b10-kube-api-access-rzc7g\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901073 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/68e8cbb8-5319-4c56-9636-3bcefa32d29e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901099 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrjsb\" (UniqueName: \"kubernetes.io/projected/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-kube-api-access-jrjsb\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901122 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-config\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901146 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68e8cbb8-5319-4c56-9636-3bcefa32d29e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901214 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-trusted-ca-bundle\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 
15:21:13.901254 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-service-ca\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901311 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-oauth-config\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901350 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c96c02d3-be15-47e5-a4bd-e65644751b10-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901383 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c96c02d3-be15-47e5-a4bd-e65644751b10-metrics-tls\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901444 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k672\" (UniqueName: \"kubernetes.io/projected/68e8cbb8-5319-4c56-9636-3bcefa32d29e-kube-api-access-5k672\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901525 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-oauth-serving-cert\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901588 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-serving-cert\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.902065 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.903402 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-trusted-ca-bundle\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.903798 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-service-ca\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.904083 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-config\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.904306 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-oauth-serving-cert\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.905452 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.905811 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.907819 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.908038 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-oauth-config\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.908539 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.909235 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66pz7"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.909870 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.910414 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.912618 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.913048 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.914109 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.915442 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.915596 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-serving-cert\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.917331 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z9pc5"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.917838 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6596q"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.918177 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.918392 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9f8ct"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.918454 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.918759 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9f8ct"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.920152 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.920566 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.922315 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gcdpp"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.922502 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.922709 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.923874 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.927092 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8l46h"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.929962 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.929985 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jrkdc"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.931147 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j8tlz"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.932350 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.933445 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zrrnn"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.934757 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.936016 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sg88v"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.937017 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sg88v"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.937620 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5r48x"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.942680 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.943129 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.945267 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jpgn"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.962474 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tvwnv"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.962530 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l99lt"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.964195 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.965084 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.965403 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jgtrg"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.967227 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jgtrg"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.967521 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-fzfxb"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.968901 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fzfxb"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.972513 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.980471 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh"]
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.984861 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.999286 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.002387 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c96c02d3-be15-47e5-a4bd-e65644751b10-trusted-ca\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.002437 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzc7g\" (UniqueName: \"kubernetes.io/projected/c96c02d3-be15-47e5-a4bd-e65644751b10-kube-api-access-rzc7g\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.002468 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68e8cbb8-5319-4c56-9636-3bcefa32d29e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.002503 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68e8cbb8-5319-4c56-9636-3bcefa32d29e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.003528 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c96c02d3-be15-47e5-a4bd-e65644751b10-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.003584 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c96c02d3-be15-47e5-a4bd-e65644751b10-metrics-tls\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.003649 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k672\" (UniqueName: \"kubernetes.io/projected/68e8cbb8-5319-4c56-9636-3bcefa32d29e-kube-api-access-5k672\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.003730 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.004193 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.004388 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68e8cbb8-5319-4c56-9636-3bcefa32d29e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.005500 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hvbtc"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.006827 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.007625 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rmnzt"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.008533 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68e8cbb8-5319-4c56-9636-3bcefa32d29e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.009145 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.009265 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.011384 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.011428 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.012832 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.013727 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c8r6q"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.015397 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.017089 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66pz7"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.024793 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.024843 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fzfxb"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.026333 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.027657 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sg88v"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.030688 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.034746 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jgtrg"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.038842 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6596q"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.041486 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.043241 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z9pc5"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.043790 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.047936 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.065050 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.083661 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.155620 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.157872 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.158117 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.160010 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.171093 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.187025 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.203455 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.211798 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c96c02d3-be15-47e5-a4bd-e65644751b10-metrics-tls\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.241176 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.246664 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c96c02d3-be15-47e5-a4bd-e65644751b10-trusted-ca\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.247028 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.272817 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.298270 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.302422 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmz5l\" (UniqueName: \"kubernetes.io/projected/74b5802b-b8fb-48d1-8723-2c78386825db-kube-api-access-lmz5l\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.318238 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s67ff\" (UniqueName: \"kubernetes.io/projected/97f85e75-6682-490f-9f1d-cdf924a67f38-kube-api-access-s67ff\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.356835 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlqkc\" (UniqueName: \"kubernetes.io/projected/6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4-kube-api-access-xlqkc\") pod \"dns-operator-744455d44c-hvbtc\" (UID: \"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.362769 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkqj9\" (UniqueName: \"kubernetes.io/projected/e2fd1877-7bc3-4808-8a45-716da7b829e5-kube-api-access-xkqj9\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.381578 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjmtn\" (UniqueName: \"kubernetes.io/projected/57598ddc-f214-47b1-bdef-10bdf94607d1-kube-api-access-mjmtn\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.387225 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.387822 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.404591 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.426341 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.449690 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.449718 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"]
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.452918 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.466941 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.470770 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.485531 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.528976 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njdjz\" (UniqueName: \"kubernetes.io/projected/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-kube-api-access-njdjz\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.538113 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.548323 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rmnzt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.562753 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krlkb\" (UniqueName: \"kubernetes.io/projected/4d96cdec-34f1-44e2-9380-40475a720b31-kube-api-access-krlkb\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.580354 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlrc9\" (UniqueName: \"kubernetes.io/projected/679bac75-fbd8-4a24-ad40-9c5d10860c90-kube-api-access-jlrc9\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.587206 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.603585 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.603680 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cp7b\" (UniqueName: \"kubernetes.io/projected/d903ef3d-1544-4343-b254-15939a05fec0-kube-api-access-2cp7b\") pod \"downloads-7954f5f757-zrrnn\" (UID: \"d903ef3d-1544-4343-b254-15939a05fec0\") " pod="openshift-console/downloads-7954f5f757-zrrnn"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.623448 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.643883 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.668226 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.676165 4656 secret.go:188] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.676323 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls podName:0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.176279493 +0000 UTC m=+165.684450297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls") pod "cluster-samples-operator-665b6dd947-jcx9v" (UID: "0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9") : failed to sync secret cache: timed out waiting for the condition
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.792305 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.792564 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.794432 4656 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.794485 4656 projected.go:194] Error preparing data for projected volume kube-api-access-w7gtm for pod openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v: failed to sync configmap cache: timed out waiting for the condition
Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.794599 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-kube-api-access-w7gtm podName:0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.294574803 +0000 UTC m=+165.802745607 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w7gtm" (UniqueName: "kubernetes.io/projected/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-kube-api-access-w7gtm") pod "cluster-samples-operator-665b6dd947-jcx9v" (UID: "0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9") : failed to sync configmap cache: timed out waiting for the condition
Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.800944 4656 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.802162 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert podName:74b5802b-b8fb-48d1-8723-2c78386825db nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.30212449 +0000 UTC m=+165.810295314 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert") pod "oauth-openshift-558db77b4-7jpgn" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db") : failed to sync secret cache: timed out waiting for the condition
Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.802292 4656 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.802333 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config podName:a9d5ce28-bfd3-4a89-9339-e2df3378e9d7 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.302321926 +0000 UTC m=+165.810492730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config") pod "route-controller-manager-6576b87f9c-4twj5" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7") : failed to sync configmap cache: timed out waiting for the condition
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.802674 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zrrnn"
Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.749466 4656 configmap.go:193] Couldn't get configMap openshift-authentication/audit: failed to sync configmap cache: timed out waiting for the condition
Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.803897 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies podName:74b5802b-b8fb-48d1-8723-2c78386825db nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.30386857 +0000 UTC m=+165.812039374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies") pod "oauth-openshift-558db77b4-7jpgn" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db") : failed to sync configmap cache: timed out waiting for the condition
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.809396 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.809610 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.809728 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.810048 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.813543 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.816673 4656 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: failed to sync secret cache: timed out waiting for the condition
Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.816811 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data podName:74b5802b-b8fb-48d1-8723-2c78386825db nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.316785092 +0000 UTC m=+165.824955896 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-558db77b4-7jpgn" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.819379 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.823286 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.843157 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.851376 4656 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.851479 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login podName:74b5802b-b8fb-48d1-8723-2c78386825db nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.351449068 +0000 UTC m=+165.859619872 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login") pod "oauth-openshift-558db77b4-7jpgn" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.851779 4656 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.851818 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection podName:74b5802b-b8fb-48d1-8723-2c78386825db nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.351807928 +0000 UTC m=+165.859978742 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection") pod "oauth-openshift-558db77b4-7jpgn" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.852114 4656 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.852245 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert podName:a9d5ce28-bfd3-4a89-9339-e2df3378e9d7 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.352184759 +0000 UTC m=+165.860355623 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert") pod "route-controller-manager-6576b87f9c-4twj5" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.855319 4656 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.855377 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-auth-proxy-config podName:4d96cdec-34f1-44e2-9380-40475a720b31 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.35536283 +0000 UTC m=+165.863533674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-auth-proxy-config") pod "machine-approver-56656f9798-6zb9x" (UID: "4d96cdec-34f1-44e2-9380-40475a720b31") : failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.855403 4656 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.855432 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error podName:74b5802b-b8fb-48d1-8723-2c78386825db nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.355423832 +0000 UTC m=+165.863594726 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error") pod "oauth-openshift-558db77b4-7jpgn" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.857504 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" event={"ID":"f905c9bf-de63-4bbb-842b-c5cfb76fec46","Type":"ContainerStarted","Data":"adf139bd697bec0674abe58031b80d437f7f4d3a3679e017234792fa60a606ba"} Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.857551 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" event={"ID":"f905c9bf-de63-4bbb-842b-c5cfb76fec46","Type":"ContainerStarted","Data":"011c8afe04db65fbf680bb337a4fd0ebe7f1f4afb95b9916dd7c58091c717b26"} Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.857943 4656 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.858035 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca podName:a9d5ce28-bfd3-4a89-9339-e2df3378e9d7 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.358010236 +0000 UTC m=+165.866181110 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca") pod "route-controller-manager-6576b87f9c-4twj5" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7") : failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.863885 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.882366 4656 request.go:700] Waited for 1.015349101s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&limit=500&resourceVersion=0 Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.882765 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" event={"ID":"a23fe8b6-b461-4abb-ad2a-2bdd501fad81","Type":"ContainerStarted","Data":"47d6029e172ca163a5c77a05a8c8e3969c6c35e6abe2418dde1d0845ef20182a"} Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.888603 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.904992 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.905732 4656 generic.go:334] "Generic (PLEG): container finished" podID="f0d9e967-a840-414e-9ab7-00affd50fec5" containerID="68c5cefbc1f9da95ec54c3f2f1119adf7e7b1105d458136dd149459d12044eea" exitCode=0 Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.907374 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" event={"ID":"f0d9e967-a840-414e-9ab7-00affd50fec5","Type":"ContainerDied","Data":"68c5cefbc1f9da95ec54c3f2f1119adf7e7b1105d458136dd149459d12044eea"} Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.924209 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.932214 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j8tlz"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.932274 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" event={"ID":"f0d9e967-a840-414e-9ab7-00affd50fec5","Type":"ContainerStarted","Data":"17a56a7b997ab4d612373861e24a4dab84680e21263fe8eff34d95c613acbda5"} Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.932319 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" event={"ID":"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0","Type":"ContainerStarted","Data":"bb1ad2570d532bbc4a7045efd967d8de033a0a667b76b021f4fe18bd23f293e4"} Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.932345 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" event={"ID":"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0","Type":"ContainerStarted","Data":"3a349004818c7610c283987546efff6a56fda5a459d8905902cba0e208d7ce83"} Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.937379 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hvbtc"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.937427 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l99lt"] Jan 28 
15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.948313 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 15:21:14 crc kubenswrapper[4656]: W0128 15:21:14.950835 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57598ddc_f214_47b1_bdef_10bdf94607d1.slice/crio-bfe1ae8bf9d64e2c7b7ade30ca6cb508be53e47c35c373b451803d08422d9897 WatchSource:0}: Error finding container bfe1ae8bf9d64e2c7b7ade30ca6cb508be53e47c35c373b451803d08422d9897: Status 404 returned error can't find the container with id bfe1ae8bf9d64e2c7b7ade30ca6cb508be53e47c35c373b451803d08422d9897 Jan 28 15:21:14 crc kubenswrapper[4656]: W0128 15:21:14.957670 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97f85e75_6682_490f_9f1d_cdf924a67f38.slice/crio-c4a2df154e1c3edead1c41618548fc41c7c90a8f88ea7c9973c02acba56d736c WatchSource:0}: Error finding container c4a2df154e1c3edead1c41618548fc41c7c90a8f88ea7c9973c02acba56d736c: Status 404 returned error can't find the container with id c4a2df154e1c3edead1c41618548fc41c7c90a8f88ea7c9973c02acba56d736c Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.963533 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 15:21:14 crc kubenswrapper[4656]: W0128 15:21:14.967392 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e4f467e_8a0b_4e05_a393_3dcc06f5b8a4.slice/crio-7c582cab7e76e09a74755f708ec7512725851b77adaf41dbdd60dffa45f19603 WatchSource:0}: Error finding container 7c582cab7e76e09a74755f708ec7512725851b77adaf41dbdd60dffa45f19603: Status 404 returned error can't find the container with id 7c582cab7e76e09a74755f708ec7512725851b77adaf41dbdd60dffa45f19603 Jan 28 15:21:14 crc 
kubenswrapper[4656]: I0128 15:21:14.983831 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.008178 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.024567 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.044479 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.068218 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.084360 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.107324 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.122607 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.143685 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.164068 4656 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.183486 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.189092 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8l46h"] Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.210118 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.210526 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zrrnn"] Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.231273 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrjsb\" (UniqueName: \"kubernetes.io/projected/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-kube-api-access-jrjsb\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.245825 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.263173 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.283156 4656 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.308688 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.312814 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7gtm\" (UniqueName: \"kubernetes.io/projected/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-kube-api-access-w7gtm\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.312911 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.313057 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.313142 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:15 
crc kubenswrapper[4656]: I0128 15:21:15.390778 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.391381 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.393111 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rmnzt"] Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.393267 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.400070 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.408178 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.413986 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.414049 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.414077 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.414225 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.414247 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.414266 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.414295 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-auth-proxy-config\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:15 crc kubenswrapper[4656]: W0128 15:21:15.415071 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2fd1877_7bc3_4808_8a45_716da7b829e5.slice/crio-7fde1bf31974dffc7925336c520693e3a96c2d054c75849032b196bf5246ab0b WatchSource:0}: Error finding container 7fde1bf31974dffc7925336c520693e3a96c2d054c75849032b196bf5246ab0b: Status 404 returned error can't find the container with id 7fde1bf31974dffc7925336c520693e3a96c2d054c75849032b196bf5246ab0b Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.415284 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gcdpp"] Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.424576 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.443821 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.465148 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.477102 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.482497 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.504160 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.523324 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 15:21:15 crc kubenswrapper[4656]: E0128 15:21:15.538452 4656 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:15 crc kubenswrapper[4656]: E0128 15:21:15.538531 4656 projected.go:194] Error preparing data for projected volume kube-api-access-d7qd6 for pod openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:15 crc kubenswrapper[4656]: E0128 15:21:15.538610 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6 podName:a9d5ce28-bfd3-4a89-9339-e2df3378e9d7 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.038584867 +0000 UTC m=+166.546755671 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d7qd6" (UniqueName: "kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6") pod "route-controller-manager-6576b87f9c-4twj5" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7") : failed to sync configmap cache: timed out waiting for the condition
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.544268 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.563656 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.585379 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.602931 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.628311 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.644144 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.665046 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.671476 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jrkdc"]
Jan 28 15:21:15 crc kubenswrapper[4656]: W0128 15:21:15.681405 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacd5c0d8_8e06_4dfe_9e89_fd89b194ec1f.slice/crio-c41207c51ae6b24c926215e0f0025b63660d715fc02590a179969ca36e581017 WatchSource:0}: Error finding container c41207c51ae6b24c926215e0f0025b63660d715fc02590a179969ca36e581017: Status 404 returned error can't find the container with id c41207c51ae6b24c926215e0f0025b63660d715fc02590a179969ca36e581017
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.683635 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.704454 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.727001 4656 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.744023 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.764676 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.783317 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.803612 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.822917 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.843696 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.863421 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.884068 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.902066 4656 request.go:700] Waited for 1.899246885s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.924140 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzc7g\" (UniqueName: \"kubernetes.io/projected/c96c02d3-be15-47e5-a4bd-e65644751b10-kube-api-access-rzc7g\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.931180 4656 generic.go:334] "Generic (PLEG): container finished" podID="a23fe8b6-b461-4abb-ad2a-2bdd501fad81" containerID="58e486120f1f647b424606ea52dd79bd76d1d9378d9c0c2886828037842d10c9" exitCode=0
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.931337 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" event={"ID":"a23fe8b6-b461-4abb-ad2a-2bdd501fad81","Type":"ContainerDied","Data":"58e486120f1f647b424606ea52dd79bd76d1d9378d9c0c2886828037842d10c9"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.935499 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" event={"ID":"f0d9e967-a840-414e-9ab7-00affd50fec5","Type":"ContainerStarted","Data":"ed1a824de788fff7cfd939d149b3a63fd7c080220f0a8764d97b6d9e277a9b87"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.935671 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.941839 4656 generic.go:334] "Generic (PLEG): container finished" podID="57598ddc-f214-47b1-bdef-10bdf94607d1" containerID="27e7f76b68c2bbef12bdeffbe31575151e8edb14b989f7699ace53dac1631768" exitCode=0
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.941919 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" event={"ID":"57598ddc-f214-47b1-bdef-10bdf94607d1","Type":"ContainerDied","Data":"27e7f76b68c2bbef12bdeffbe31575151e8edb14b989f7699ace53dac1631768"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.941955 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" event={"ID":"57598ddc-f214-47b1-bdef-10bdf94607d1","Type":"ContainerStarted","Data":"bfe1ae8bf9d64e2c7b7ade30ca6cb508be53e47c35c373b451803d08422d9897"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.943490 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k672\" (UniqueName: \"kubernetes.io/projected/68e8cbb8-5319-4c56-9636-3bcefa32d29e-kube-api-access-5k672\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.947404 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jrkdc" event={"ID":"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f","Type":"ContainerStarted","Data":"6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.947459 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jrkdc" event={"ID":"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f","Type":"ContainerStarted","Data":"c41207c51ae6b24c926215e0f0025b63660d715fc02590a179969ca36e581017"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.951268 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" event={"ID":"679bac75-fbd8-4a24-ad40-9c5d10860c90","Type":"ContainerStarted","Data":"f2c476200ac3fad60f27013e1ecb1ed0845b932ae4f0efae3bf6f9dd0eb28a5b"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.951325 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" event={"ID":"679bac75-fbd8-4a24-ad40-9c5d10860c90","Type":"ContainerStarted","Data":"e4efb46b242bf0aceff6dfd94d827dba0c1e85e29b1cb80d05ed486af5952d2b"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.957673 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc" event={"ID":"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4","Type":"ContainerStarted","Data":"0284e1fff81c3dc906a1db43941a3d4c409b0ab683203b8c6b946291e1751ef1"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.957722 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc" event={"ID":"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4","Type":"ContainerStarted","Data":"445c1ec0ef703c76041530865f710373188bc739c7df2f5d1fe1bc8b800e07cf"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.957734 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc" event={"ID":"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4","Type":"ContainerStarted","Data":"7c582cab7e76e09a74755f708ec7512725851b77adaf41dbdd60dffa45f19603"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.959873 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" event={"ID":"97f85e75-6682-490f-9f1d-cdf924a67f38","Type":"ContainerStarted","Data":"f0551be6db9554b1b88c01d840464311627da50873cfe249dfc47e8c1d6604bf"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.959902 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" event={"ID":"97f85e75-6682-490f-9f1d-cdf924a67f38","Type":"ContainerStarted","Data":"c4a2df154e1c3edead1c41618548fc41c7c90a8f88ea7c9973c02acba56d736c"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.960574 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.962133 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rmnzt" event={"ID":"e2fd1877-7bc3-4808-8a45-716da7b829e5","Type":"ContainerStarted","Data":"7a1663e370e51c8d4426eaedc32fd875143ba194d4621648d85c9defc7de549d"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.962193 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rmnzt" event={"ID":"e2fd1877-7bc3-4808-8a45-716da7b829e5","Type":"ContainerStarted","Data":"7fde1bf31974dffc7925336c520693e3a96c2d054c75849032b196bf5246ab0b"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.962678 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-rmnzt"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.964147 4656 patch_prober.go:28] interesting pod/console-operator-58897d9998-rmnzt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.964234 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rmnzt" podUID="e2fd1877-7bc3-4808-8a45-716da7b829e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.964717 4656 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l99lt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.964751 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" podUID="97f85e75-6682-490f-9f1d-cdf924a67f38" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.967076 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" event={"ID":"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9","Type":"ContainerStarted","Data":"d7c345022a99590d867022d672af1c88b341e1164a6513b904b3a6971299921f"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.967120 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" event={"ID":"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9","Type":"ContainerStarted","Data":"d35c67a4b06b0313ea9859c580860672c985d6796a0bdd76aa918e04c4a93300"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.967133 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" event={"ID":"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9","Type":"ContainerStarted","Data":"5326e46627dc203978818d798dd745a3c2c2102f70c971fda55c35276af43f67"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.968472 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c96c02d3-be15-47e5-a4bd-e65644751b10-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.973437 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zrrnn" event={"ID":"d903ef3d-1544-4343-b254-15939a05fec0","Type":"ContainerStarted","Data":"60c65196fb4866dd0b1e0bbc6538e306f747567dc79d0ebdf9efbab4620baf63"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.973911 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zrrnn"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.974000 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zrrnn" event={"ID":"d903ef3d-1544-4343-b254-15939a05fec0","Type":"ContainerStarted","Data":"73be6363650d9e30dc18d3355d038b1a30ec5a219ce906d880234a62de3b63b0"}
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.974679 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.974814 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.985393 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.995351 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-auth-proxy-config\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.003389 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.010545 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028091 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-ca\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028229 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8947t\" (UniqueName: \"kubernetes.io/projected/8be343e6-2e63-426f-95cc-06f64f7417cc-kube-api-access-8947t\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028317 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-client\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028382 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be343e6-2e63-426f-95cc-06f64f7417cc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028419 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-service-ca\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028494 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be343e6-2e63-426f-95cc-06f64f7417cc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028522 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-tls\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028538 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-trusted-ca\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.031055 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-certificates\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.031147 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7db3547-02a4-4214-ad16-0b513f48b6d7-serving-cert\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.031268 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.032016 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5823f5c7-fabe-4d4b-a3df-49349749b19e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.032152 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.032320 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-bound-sa-token\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.032415 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-config\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.032656 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7f9j\" (UniqueName: \"kubernetes.io/projected/c7db3547-02a4-4214-ad16-0b513f48b6d7-kube-api-access-k7f9j\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.032884 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5823f5c7-fabe-4d4b-a3df-49349749b19e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.033295 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.533278674 +0000 UTC m=+167.041449568 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.033633 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt6gm\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-kube-api-access-bt6gm\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.036698 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.047537 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.063875 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.085886 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.106169 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.107894 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.117555 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.123010 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.134393 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.134685 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.134923 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-service-ca\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.134966 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk6ws\" (UniqueName: \"kubernetes.io/projected/585af9c8-b122-49d3-8640-3dc5fb1613ab-kube-api-access-pk6ws\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.134994 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-mountpoint-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135021 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be343e6-2e63-426f-95cc-06f64f7417cc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135044 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9f9g\" (UniqueName: \"kubernetes.io/projected/e4b1709f-307f-47d0-8648-125e2514c80e-kube-api-access-j9f9g\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135069 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-apiservice-cert\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135091 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-plugins-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135113 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-tls\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135131 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-trusted-ca\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135152 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-registration-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135173 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3a1daea1-c036-4141-95dc-ce3567519970-certs\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135209 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2klt7\" (UniqueName: \"kubernetes.io/projected/4c5b7670-d8d9-4ea1-822d-788709c62ee5-kube-api-access-2klt7\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135254 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e4b1709f-307f-47d0-8648-125e2514c80e-profile-collector-cert\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135277 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9360e3-7265-42ec-b104-d62ab6ec66f4-serving-cert\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135300 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgh67\" (UniqueName: \"kubernetes.io/projected/50ad627b-637f-4763-96c1-4c1beb352c70-kube-api-access-zgh67\") pod \"package-server-manager-789f6589d5-wh267\" (UID: \"50ad627b-637f-4763-96c1-4c1beb352c70\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135324 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk5b6\" (UniqueName: \"kubernetes.io/projected/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-kube-api-access-gk5b6\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135373 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-config\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135396 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-certificates\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135421 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-metrics-tls\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135443 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gct8j\" (UniqueName: \"kubernetes.io/projected/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-kube-api-access-gct8j\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5"
Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.135484 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.635449221 +0000 UTC m=+167.143620025 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135571 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5823f5c7-fabe-4d4b-a3df-49349749b19e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135610 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-default-certificate\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135638 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/390f2e18-1f51-46da-93cf-da6b0d524b0d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135672 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p5b8\" (UniqueName:
\"kubernetes.io/projected/c7b09f99-0d13-49a0-8b8d-fc77915a171d-kube-api-access-7p5b8\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135742 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c50cc4de-dd25-4337-a532-3384d5a87626-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w7bws\" (UID: \"c50cc4de-dd25-4337-a532-3384d5a87626\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135800 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7f9j\" (UniqueName: \"kubernetes.io/projected/c7db3547-02a4-4214-ad16-0b513f48b6d7-kube-api-access-k7f9j\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135829 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6w9g\" (UniqueName: \"kubernetes.io/projected/3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f-kube-api-access-b6w9g\") pod \"ingress-canary-fzfxb\" (UID: \"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f\") " pod="openshift-ingress-canary/ingress-canary-fzfxb" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135854 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-stats-auth\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " 
pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135886 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5823f5c7-fabe-4d4b-a3df-49349749b19e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135910 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-tmpfs\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135936 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6v7n\" (UniqueName: \"kubernetes.io/projected/39d8752e-2237-4115-b66c-a9afc736dffe-kube-api-access-d6v7n\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135975 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3a1daea1-c036-4141-95dc-ce3567519970-node-bootstrap-token\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136003 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f-cert\") pod \"ingress-canary-fzfxb\" (UID: \"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f\") " pod="openshift-ingress-canary/ingress-canary-fzfxb" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136050 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-config-volume\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136071 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390f2e18-1f51-46da-93cf-da6b0d524b0d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136094 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-service-ca\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136101 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-ca\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136129 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-zkd52\" (UniqueName: \"kubernetes.io/projected/d05cf23a-0ecb-4cd3-bafe-0fd7d930d916-kube-api-access-zkd52\") pod \"migrator-59844c95c7-xlcjq\" (UID: \"d05cf23a-0ecb-4cd3-bafe-0fd7d930d916\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136154 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136176 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n94ds\" (UniqueName: \"kubernetes.io/projected/01e19302-0470-49dd-88d5-9a568e820278-kube-api-access-n94ds\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136212 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25578c16-69f7-48c0-8a44-040950b9b8a1-config-volume\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136231 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz67g\" (UniqueName: \"kubernetes.io/projected/7f9360e3-7265-42ec-b104-d62ab6ec66f4-kube-api-access-lz67g\") pod \"service-ca-operator-777779d784-6596q\" (UID: 
\"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136241 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be343e6-2e63-426f-95cc-06f64f7417cc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136336 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136358 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-client\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136377 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136393 4656 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-webhook-cert\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136427 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8lzx\" (UniqueName: \"kubernetes.io/projected/c50cc4de-dd25-4337-a532-3384d5a87626-kube-api-access-g8lzx\") pod \"control-plane-machine-set-operator-78cbb6b69f-w7bws\" (UID: \"c50cc4de-dd25-4337-a532-3384d5a87626\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136474 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8tb7\" (UniqueName: \"kubernetes.io/projected/d8d474a0-0bf1-467d-ab77-4b94a17f7881-kube-api-access-n8tb7\") pod \"multus-admission-controller-857f4d67dd-tvwnv\" (UID: \"d8d474a0-0bf1-467d-ab77-4b94a17f7881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136495 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-socket-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136511 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01e19302-0470-49dd-88d5-9a568e820278-service-ca-bundle\") pod 
\"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136527 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/50ad627b-637f-4763-96c1-4c1beb352c70-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wh267\" (UID: \"50ad627b-637f-4763-96c1-4c1beb352c70\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136580 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-config\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136599 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39d8752e-2237-4115-b66c-a9afc736dffe-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136620 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtg6z\" (UniqueName: \"kubernetes.io/projected/854de0fc-bfab-4d4d-9931-b84561234f71-kube-api-access-rtg6z\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136638 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-signing-cabundle\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136655 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9360e3-7265-42ec-b104-d62ab6ec66f4-config\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136704 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7db3547-02a4-4214-ad16-0b513f48b6d7-serving-cert\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136718 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e4b1709f-307f-47d0-8648-125e2514c80e-srv-cert\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136751 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27whz\" (UniqueName: 
\"kubernetes.io/projected/3a1daea1-c036-4141-95dc-ce3567519970-kube-api-access-27whz\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136780 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136827 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-bound-sa-token\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136844 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-config\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136868 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-metrics-certs\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136883 4656 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8d474a0-0bf1-467d-ab77-4b94a17f7881-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tvwnv\" (UID: \"d8d474a0-0bf1-467d-ab77-4b94a17f7881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136903 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx79z\" (UniqueName: \"kubernetes.io/projected/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-kube-api-access-wx79z\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136937 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/585af9c8-b122-49d3-8640-3dc5fb1613ab-proxy-tls\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136951 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/585af9c8-b122-49d3-8640-3dc5fb1613ab-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136980 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136995 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/585af9c8-b122-49d3-8640-3dc5fb1613ab-images\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137011 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9pk9\" (UniqueName: \"kubernetes.io/projected/25578c16-69f7-48c0-8a44-040950b9b8a1-kube-api-access-j9pk9\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137027 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/854de0fc-bfab-4d4d-9931-b84561234f71-profile-collector-cert\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137043 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39d8752e-2237-4115-b66c-a9afc736dffe-proxy-tls\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 
15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137044 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5823f5c7-fabe-4d4b-a3df-49349749b19e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137116 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7qd6\" (UniqueName: \"kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137135 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt6gm\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-kube-api-access-bt6gm\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137153 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137180 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/854de0fc-bfab-4d4d-9931-b84561234f71-srv-cert\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137224 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-signing-key\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137256 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137274 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25578c16-69f7-48c0-8a44-040950b9b8a1-secret-volume\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137288 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-csi-data-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137323 4656 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/390f2e18-1f51-46da-93cf-da6b0d524b0d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137351 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8947t\" (UniqueName: \"kubernetes.io/projected/8be343e6-2e63-426f-95cc-06f64f7417cc-kube-api-access-8947t\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137389 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be343e6-2e63-426f-95cc-06f64f7417cc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137838 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-ca\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.138297 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-tls\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.139429    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-certificates\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.141777    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-trusted-ca\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.146432    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be343e6-2e63-426f-95cc-06f64f7417cc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.147025    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7qd6\" (UniqueName: \"kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.147533    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-config\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q"
Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.147851    4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.647835677 +0000 UTC m=+167.156006481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.148836    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-client\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.155906    4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.157711    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7db3547-02a4-4214-ad16-0b513f48b6d7-serving-cert\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.164361    4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.167443    4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.186743    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5823f5c7-fabe-4d4b-a3df-49349749b19e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.187310    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.187823    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.190594    4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.199278    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.207624    4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.211558    4656 secret.go:188] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.211706    4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls podName:0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.211674352 +0000 UTC m=+167.719845146 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls") pod "cluster-samples-operator-665b6dd947-jcx9v" (UID: "0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9") : failed to sync secret cache: timed out waiting for the condition
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.220567    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.224984    4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.225052    4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239107    4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.239296    4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.739266525 +0000 UTC m=+167.247437339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239359    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-default-certificate\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239410    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/390f2e18-1f51-46da-93cf-da6b0d524b0d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239436    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p5b8\" (UniqueName: \"kubernetes.io/projected/c7b09f99-0d13-49a0-8b8d-fc77915a171d-kube-api-access-7p5b8\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239659    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6w9g\" (UniqueName: \"kubernetes.io/projected/3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f-kube-api-access-b6w9g\") pod \"ingress-canary-fzfxb\" (UID: \"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f\") " pod="openshift-ingress-canary/ingress-canary-fzfxb"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239694    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c50cc4de-dd25-4337-a532-3384d5a87626-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w7bws\" (UID: \"c50cc4de-dd25-4337-a532-3384d5a87626\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239753    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-stats-auth\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239779    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-tmpfs\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239844    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6v7n\" (UniqueName: \"kubernetes.io/projected/39d8752e-2237-4115-b66c-a9afc736dffe-kube-api-access-d6v7n\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239876    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3a1daea1-c036-4141-95dc-ce3567519970-node-bootstrap-token\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239933    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f-cert\") pod \"ingress-canary-fzfxb\" (UID: \"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f\") " pod="openshift-ingress-canary/ingress-canary-fzfxb"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240275    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-config-volume\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240303    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390f2e18-1f51-46da-93cf-da6b0d524b0d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240370    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkd52\" (UniqueName: \"kubernetes.io/projected/d05cf23a-0ecb-4cd3-bafe-0fd7d930d916-kube-api-access-zkd52\") pod \"migrator-59844c95c7-xlcjq\" (UID: \"d05cf23a-0ecb-4cd3-bafe-0fd7d930d916\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240433    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240462    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n94ds\" (UniqueName: \"kubernetes.io/projected/01e19302-0470-49dd-88d5-9a568e820278-kube-api-access-n94ds\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240521    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25578c16-69f7-48c0-8a44-040950b9b8a1-config-volume\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240551    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz67g\" (UniqueName: \"kubernetes.io/projected/7f9360e3-7265-42ec-b104-d62ab6ec66f4-kube-api-access-lz67g\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240614    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240643    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240696    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-webhook-cert\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240721    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8lzx\" (UniqueName: \"kubernetes.io/projected/c50cc4de-dd25-4337-a532-3384d5a87626-kube-api-access-g8lzx\") pod \"control-plane-machine-set-operator-78cbb6b69f-w7bws\" (UID: \"c50cc4de-dd25-4337-a532-3384d5a87626\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240773    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8tb7\" (UniqueName: \"kubernetes.io/projected/d8d474a0-0bf1-467d-ab77-4b94a17f7881-kube-api-access-n8tb7\") pod \"multus-admission-controller-857f4d67dd-tvwnv\" (UID: \"d8d474a0-0bf1-467d-ab77-4b94a17f7881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240801    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-socket-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240823    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/50ad627b-637f-4763-96c1-4c1beb352c70-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wh267\" (UID: \"50ad627b-637f-4763-96c1-4c1beb352c70\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240879    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01e19302-0470-49dd-88d5-9a568e820278-service-ca-bundle\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240942    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-config\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240976    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39d8752e-2237-4115-b66c-a9afc736dffe-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241044    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtg6z\" (UniqueName: \"kubernetes.io/projected/854de0fc-bfab-4d4d-9931-b84561234f71-kube-api-access-rtg6z\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241097    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9360e3-7265-42ec-b104-d62ab6ec66f4-config\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241134    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-signing-cabundle\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241193    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e4b1709f-307f-47d0-8648-125e2514c80e-srv-cert\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241236    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27whz\" (UniqueName: \"kubernetes.io/projected/3a1daea1-c036-4141-95dc-ce3567519970-kube-api-access-27whz\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241270    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241362    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-metrics-certs\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241389    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8d474a0-0bf1-467d-ab77-4b94a17f7881-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tvwnv\" (UID: \"d8d474a0-0bf1-467d-ab77-4b94a17f7881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241443    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx79z\" (UniqueName: \"kubernetes.io/projected/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-kube-api-access-wx79z\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241677    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/585af9c8-b122-49d3-8640-3dc5fb1613ab-proxy-tls\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241708    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/585af9c8-b122-49d3-8640-3dc5fb1613ab-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241785    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241840    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/585af9c8-b122-49d3-8640-3dc5fb1613ab-images\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241909    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9pk9\" (UniqueName: \"kubernetes.io/projected/25578c16-69f7-48c0-8a44-040950b9b8a1-kube-api-access-j9pk9\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241935    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/854de0fc-bfab-4d4d-9931-b84561234f71-profile-collector-cert\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241957    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39d8752e-2237-4115-b66c-a9afc736dffe-proxy-tls\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241994    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242475    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/854de0fc-bfab-4d4d-9931-b84561234f71-srv-cert\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242553    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242578    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-signing-key\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242601    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25578c16-69f7-48c0-8a44-040950b9b8a1-secret-volume\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242651    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-csi-data-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242685    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/390f2e18-1f51-46da-93cf-da6b0d524b0d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242774    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk6ws\" (UniqueName: \"kubernetes.io/projected/585af9c8-b122-49d3-8640-3dc5fb1613ab-kube-api-access-pk6ws\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242815    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-mountpoint-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242842    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9f9g\" (UniqueName: \"kubernetes.io/projected/e4b1709f-307f-47d0-8648-125e2514c80e-kube-api-access-j9f9g\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242871    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-apiservice-cert\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242897    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-registration-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242921    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-plugins-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242946    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2klt7\" (UniqueName: \"kubernetes.io/projected/4c5b7670-d8d9-4ea1-822d-788709c62ee5-kube-api-access-2klt7\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242996    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3a1daea1-c036-4141-95dc-ce3567519970-certs\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.243022    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e4b1709f-307f-47d0-8648-125e2514c80e-profile-collector-cert\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.243044    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9360e3-7265-42ec-b104-d62ab6ec66f4-serving-cert\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.243358    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk5b6\" (UniqueName: \"kubernetes.io/projected/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-kube-api-access-gk5b6\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.243413    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgh67\" (UniqueName: \"kubernetes.io/projected/50ad627b-637f-4763-96c1-4c1beb352c70-kube-api-access-zgh67\") pod \"package-server-manager-789f6589d5-wh267\" (UID: \"50ad627b-637f-4763-96c1-4c1beb352c70\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.243456    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-config\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.243888    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-metrics-tls\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.243924    4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gct8j\" (UniqueName: \"kubernetes.io/projected/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-kube-api-access-gct8j\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.244377    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-mountpoint-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.246545    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/585af9c8-b122-49d3-8640-3dc5fb1613ab-images\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.250748    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-registration-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.250858    4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-plugins-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v"
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.252655    4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 28 15:21:16 crc
kubenswrapper[4656]: I0128 15:21:16.253524 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c50cc4de-dd25-4337-a532-3384d5a87626-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w7bws\" (UID: \"c50cc4de-dd25-4337-a532-3384d5a87626\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.254416 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/390f2e18-1f51-46da-93cf-da6b0d524b0d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.255123 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.255270 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-csi-data-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.255698 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-tmpfs\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: 
\"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.256921 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-default-certificate\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.257488 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390f2e18-1f51-46da-93cf-da6b0d524b0d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.259070 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-config\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.260690 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/854de0fc-bfab-4d4d-9931-b84561234f71-profile-collector-cert\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.260788 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-signing-key\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.261353 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-webhook-cert\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.261713 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3a1daea1-c036-4141-95dc-ce3567519970-certs\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.261747 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.262052 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/585af9c8-b122-49d3-8640-3dc5fb1613ab-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.262075 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e4b1709f-307f-47d0-8648-125e2514c80e-profile-collector-cert\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.262420 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9360e3-7265-42ec-b104-d62ab6ec66f4-serving-cert\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.262666 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f-cert\") pod \"ingress-canary-fzfxb\" (UID: \"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f\") " pod="openshift-ingress-canary/ingress-canary-fzfxb" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.262921 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-config-volume\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.263762 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.763745318 +0000 UTC m=+167.271916172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.264039 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01e19302-0470-49dd-88d5-9a568e820278-service-ca-bundle\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.264107 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9360e3-7265-42ec-b104-d62ab6ec66f4-config\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.264272 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-socket-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.264680 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-signing-cabundle\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.274476 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-config\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.275062 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/854de0fc-bfab-4d4d-9931-b84561234f71-srv-cert\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.276821 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.277082 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3a1daea1-c036-4141-95dc-ce3567519970-node-bootstrap-token\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.279083 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39d8752e-2237-4115-b66c-a9afc736dffe-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 
15:21:16.280717 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.281616 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-metrics-tls\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.283200 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8d474a0-0bf1-467d-ab77-4b94a17f7881-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tvwnv\" (UID: \"d8d474a0-0bf1-467d-ab77-4b94a17f7881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.283722 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-apiservice-cert\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.283921 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.285608 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25578c16-69f7-48c0-8a44-040950b9b8a1-config-volume\") 
pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.287464 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/585af9c8-b122-49d3-8640-3dc5fb1613ab-proxy-tls\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.287981 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39d8752e-2237-4115-b66c-a9afc736dffe-proxy-tls\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.287991 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25578c16-69f7-48c0-8a44-040950b9b8a1-secret-volume\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.288364 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-metrics-certs\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.289901 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.290727 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.291093 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-stats-auth\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.291790 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e4b1709f-307f-47d0-8648-125e2514c80e-srv-cert\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.292706 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7gtm\" (UniqueName: \"kubernetes.io/projected/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-kube-api-access-w7gtm\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.296098 
4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/50ad627b-637f-4763-96c1-4c1beb352c70-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wh267\" (UID: \"50ad627b-637f-4763-96c1-4c1beb352c70\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.335891 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7f9j\" (UniqueName: \"kubernetes.io/projected/c7db3547-02a4-4214-ad16-0b513f48b6d7-kube-api-access-k7f9j\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.345350 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.347092 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.847071853 +0000 UTC m=+167.355242657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.349672 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt6gm\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-kube-api-access-bt6gm\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.349895 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.350411 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.850362878 +0000 UTC m=+167.358533682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.363329 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-bound-sa-token\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.397497 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.400850 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8947t\" (UniqueName: \"kubernetes.io/projected/8be343e6-2e63-426f-95cc-06f64f7417cc-kube-api-access-8947t\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.403569 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p5b8\" (UniqueName: \"kubernetes.io/projected/c7b09f99-0d13-49a0-8b8d-fc77915a171d-kube-api-access-7p5b8\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc 
kubenswrapper[4656]: I0128 15:21:16.427022 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6w9g\" (UniqueName: \"kubernetes.io/projected/3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f-kube-api-access-b6w9g\") pod \"ingress-canary-fzfxb\" (UID: \"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f\") " pod="openshift-ingress-canary/ingress-canary-fzfxb" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.441924 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.456532 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gct8j\" (UniqueName: \"kubernetes.io/projected/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-kube-api-access-gct8j\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.459700 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.460419 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.9603944 +0000 UTC m=+167.468565214 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.474039 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk6ws\" (UniqueName: \"kubernetes.io/projected/585af9c8-b122-49d3-8640-3dc5fb1613ab-kube-api-access-pk6ws\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.487437 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/390f2e18-1f51-46da-93cf-da6b0d524b0d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.519949 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9f9g\" (UniqueName: \"kubernetes.io/projected/e4b1709f-307f-47d0-8648-125e2514c80e-kube-api-access-j9f9g\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.520996 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"] Jan 28 15:21:16 crc 
kubenswrapper[4656]: I0128 15:21:16.538875 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.549213 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx79z\" (UniqueName: \"kubernetes.io/projected/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-kube-api-access-wx79z\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.562560 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.562969 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.062951338 +0000 UTC m=+167.571122142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.564948 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2klt7\" (UniqueName: \"kubernetes.io/projected/4c5b7670-d8d9-4ea1-822d-788709c62ee5-kube-api-access-2klt7\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.569728 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9pk9\" (UniqueName: \"kubernetes.io/projected/25578c16-69f7-48c0-8a44-040950b9b8a1-kube-api-access-j9pk9\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.570549 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fzfxb" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.588571 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj"] Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.607686 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.610523 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkd52\" (UniqueName: \"kubernetes.io/projected/d05cf23a-0ecb-4cd3-bafe-0fd7d930d916-kube-api-access-zkd52\") pod \"migrator-59844c95c7-xlcjq\" (UID: \"d05cf23a-0ecb-4cd3-bafe-0fd7d930d916\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.614228 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6v7n\" (UniqueName: \"kubernetes.io/projected/39d8752e-2237-4115-b66c-a9afc736dffe-kube-api-access-d6v7n\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.623344 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk5b6\" (UniqueName: \"kubernetes.io/projected/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-kube-api-access-gk5b6\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.644195 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.652239 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgh67\" (UniqueName: \"kubernetes.io/projected/50ad627b-637f-4763-96c1-4c1beb352c70-kube-api-access-zgh67\") pod \"package-server-manager-789f6589d5-wh267\" (UID: \"50ad627b-637f-4763-96c1-4c1beb352c70\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.659446 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.669746 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.670310 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.670496 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.170464678 +0000 UTC m=+167.678635482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.670673 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.671249 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.17123939 +0000 UTC m=+167.679410194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.693340 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.695690 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz67g\" (UniqueName: \"kubernetes.io/projected/7f9360e3-7265-42ec-b104-d62ab6ec66f4-kube-api-access-lz67g\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.714587 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.716137 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.721982 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtg6z\" (UniqueName: \"kubernetes.io/projected/854de0fc-bfab-4d4d-9931-b84561234f71-kube-api-access-rtg6z\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.724714 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.736423 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.769565 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.769684 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.773415 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.774275 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.774560 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.274542579 +0000 UTC m=+167.782713373 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.774643 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.775021 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.275014322 +0000 UTC m=+167.783185126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.784896 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.786827 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27whz\" (UniqueName: \"kubernetes.io/projected/3a1daea1-c036-4141-95dc-ce3567519970-kube-api-access-27whz\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.787484 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8lzx\" (UniqueName: \"kubernetes.io/projected/c50cc4de-dd25-4337-a532-3384d5a87626-kube-api-access-g8lzx\") pod \"control-plane-machine-set-operator-78cbb6b69f-w7bws\" (UID: \"c50cc4de-dd25-4337-a532-3384d5a87626\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.802740 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.820005 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8tb7\" (UniqueName: \"kubernetes.io/projected/d8d474a0-0bf1-467d-ab77-4b94a17f7881-kube-api-access-n8tb7\") pod \"multus-admission-controller-857f4d67dd-tvwnv\" (UID: \"d8d474a0-0bf1-467d-ab77-4b94a17f7881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.822084 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n94ds\" (UniqueName: \"kubernetes.io/projected/01e19302-0470-49dd-88d5-9a568e820278-kube-api-access-n94ds\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.846740 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.871775 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jpgn"] Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.875443 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.875630 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.876011 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.375991544 +0000 UTC m=+167.884162348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.893051 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.893527 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.918128 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.953495 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.972794 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.977874 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.992708 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.478915683 +0000 UTC m=+167.987086487 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:17 crc kubenswrapper[4656]: W0128 15:21:17.011328 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74b5802b_b8fb_48d1_8723_2c78386825db.slice/crio-4dc61eaad2ae739a312d7f61027749c556ace7e16aad00f87ef7790d83668fcc WatchSource:0}: Error finding container 4dc61eaad2ae739a312d7f61027749c556ace7e16aad00f87ef7790d83668fcc: Status 404 returned error can't find the container with id 4dc61eaad2ae739a312d7f61027749c556ace7e16aad00f87ef7790d83668fcc Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.018954 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" event={"ID":"57598ddc-f214-47b1-bdef-10bdf94607d1","Type":"ContainerStarted","Data":"c1c25d7a849348e00e28466a6d1954013a10df1897f625e7de1e4191f6e3557c"} Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.029311 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" event={"ID":"c96c02d3-be15-47e5-a4bd-e65644751b10","Type":"ContainerStarted","Data":"0c93c6b444188b16eb93563c92bda3521517c46f5314585cf2e6a0e50fe992f9"} Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.031224 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c8r6q"] Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.053977 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.055588 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" event={"ID":"a23fe8b6-b461-4abb-ad2a-2bdd501fad81","Type":"ContainerStarted","Data":"cec9e132b7774af4147a58a97483917bc89d2a629e7e9e677a2a5b66735ff993"} Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.060893 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" event={"ID":"4d96cdec-34f1-44e2-9380-40475a720b31","Type":"ContainerStarted","Data":"8292a5eabf703093297aeb509ba073fd967ad8381efe5eaa03520355550b90f2"} Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.063820 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" event={"ID":"68e8cbb8-5319-4c56-9636-3bcefa32d29e","Type":"ContainerStarted","Data":"67b131c844d27b471e69f8197dfc1211746c8dba7c1efa0fb6a1053449dac561"} Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.066553 4656 patch_prober.go:28] interesting pod/console-operator-58897d9998-rmnzt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.066592 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rmnzt" podUID="e2fd1877-7bc3-4808-8a45-716da7b829e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.076107 4656 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.076151 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.085775 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.087200 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.587180664 +0000 UTC m=+168.095351468 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.089156 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.187993 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.200515 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.700494791 +0000 UTC m=+168.208665595 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.282834 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45"] Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.289251 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.289507 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.292015 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.79198105 +0000 UTC m=+168.300151864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.313598 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.397024 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.397825 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.897807652 +0000 UTC m=+168.405978466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.407268 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"] Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.451364 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.500946 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.501206 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.001186823 +0000 UTC m=+168.509357627 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.501270 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.501663 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.001647466 +0000 UTC m=+168.509818270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.562614 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fzfxb"] Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.606495 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.606893 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.106877671 +0000 UTC m=+168.615048475 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.657614 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66pz7"] Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.708990 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.709433 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.209414008 +0000 UTC m=+168.717584812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.811645 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.812221 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.312205412 +0000 UTC m=+168.820376216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.918361 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.918786 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.418766674 +0000 UTC m=+168.926937478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:17 crc kubenswrapper[4656]: W0128 15:21:17.920798 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a1daea1_c036_4141_95dc_ce3567519970.slice/crio-148ee8da7835b5a10fc7953862e8b5815bc845e7df914412c85063ab85c35985 WatchSource:0}: Error finding container 148ee8da7835b5a10fc7953862e8b5815bc845e7df914412c85063ab85c35985: Status 404 returned error can't find the container with id 148ee8da7835b5a10fc7953862e8b5815bc845e7df914412c85063ab85c35985 Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.971803 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz"] Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.037834 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.038359 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.538334861 +0000 UTC m=+169.046505665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.148314 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.148810 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.648792916 +0000 UTC m=+169.156963720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.155120 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fzfxb" event={"ID":"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f","Type":"ContainerStarted","Data":"c444b91f33ea3387e798dae82c23a652d26afa98a076c64a875f62f144b8c851"} Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.177372 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qh5kz" event={"ID":"01e19302-0470-49dd-88d5-9a568e820278","Type":"ContainerStarted","Data":"3b3c319f16c2221244c0eb7a38fdc5e703bda0da1f567d913f202c11977b7ecb"} Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.180998 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" event={"ID":"74b5802b-b8fb-48d1-8723-2c78386825db","Type":"ContainerStarted","Data":"4dc61eaad2ae739a312d7f61027749c556ace7e16aad00f87ef7790d83668fcc"} Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.182379 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" event={"ID":"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7","Type":"ContainerStarted","Data":"8331e16235e5a94b4f704932116317a3f82336aecaf9afbef2cef9053d0f0822"} Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.196651 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" event={"ID":"8be343e6-2e63-426f-95cc-06f64f7417cc","Type":"ContainerStarted","Data":"b822a32ac38f1696b3489f1d2e55ca92480ca95e59d1e7e33e05c35e97623ba2"} Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.234506 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" event={"ID":"57598ddc-f214-47b1-bdef-10bdf94607d1","Type":"ContainerStarted","Data":"fbcac6d52a862a259ed1af9b8d6a9518cd40e3f84f4fd2d0d85a19ce79e560a0"} Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.248855 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.249930 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.74989952 +0000 UTC m=+169.258070324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.253864 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.254098 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.75406311 +0000 UTC m=+169.262233914 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.280257 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" event={"ID":"4d96cdec-34f1-44e2-9380-40475a720b31","Type":"ContainerStarted","Data":"abc69d5ee42bb4db1d2eae356a4d37a0e38909a824b894a227db13527ca68810"} Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.294915 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc" podStartSLOduration=131.294899654 podStartE2EDuration="2m11.294899654s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.253777592 +0000 UTC m=+168.761948396" watchObservedRunningTime="2026-01-28 15:21:18.294899654 +0000 UTC m=+168.803070448" Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.302108 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" event={"ID":"c7db3547-02a4-4214-ad16-0b513f48b6d7","Type":"ContainerStarted","Data":"0b74a8011b4d971bf548e32aa618c17dfae7d6618e9f16a264c61aa9908601ca"} Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.315980 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr"] Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.318019 4656 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9f8ct" event={"ID":"3a1daea1-c036-4141-95dc-ce3567519970","Type":"ContainerStarted","Data":"148ee8da7835b5a10fc7953862e8b5815bc845e7df914412c85063ab85c35985"} Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.356980 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.357938 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.857922955 +0000 UTC m=+169.366093759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.458356 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.460550 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.960529934 +0000 UTC m=+169.468700788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.485601 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" podStartSLOduration=132.485577434 podStartE2EDuration="2m12.485577434s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.371770993 +0000 UTC m=+168.879941797" watchObservedRunningTime="2026-01-28 15:21:18.485577434 +0000 UTC m=+168.993748248" Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.486145 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-jrkdc" podStartSLOduration=131.48614085 podStartE2EDuration="2m11.48614085s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.484829262 +0000 UTC m=+168.993000056" watchObservedRunningTime="2026-01-28 15:21:18.48614085 +0000 UTC m=+168.994311654" Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.535102 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" podStartSLOduration=132.535081967 podStartE2EDuration="2m12.535081967s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.522749682 +0000 UTC m=+169.030920496" watchObservedRunningTime="2026-01-28 15:21:18.535081967 +0000 UTC m=+169.043252761" Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.560165 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.560584 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.060567489 +0000 UTC m=+169.568738293 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.609001 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" podStartSLOduration=131.608977521 podStartE2EDuration="2m11.608977521s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.558264253 +0000 UTC m=+169.066435057" watchObservedRunningTime="2026-01-28 15:21:18.608977521 +0000 UTC m=+169.117148325" Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.644223 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-zrrnn" podStartSLOduration=131.644150411 podStartE2EDuration="2m11.644150411s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.642746901 +0000 UTC m=+169.150917705" watchObservedRunningTime="2026-01-28 15:21:18.644150411 +0000 UTC m=+169.152321215" Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.644860 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" podStartSLOduration=131.644855782 podStartE2EDuration="2m11.644855782s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.61140982 +0000 UTC m=+169.119580624" watchObservedRunningTime="2026-01-28 15:21:18.644855782 +0000 UTC m=+169.153026586" Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.648028 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6596q"] Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.661813 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.662214 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.16220098 +0000 UTC m=+169.670371784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.769270 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.769691 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.269675589 +0000 UTC m=+169.777846393 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.769822 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" podStartSLOduration=131.769808703 podStartE2EDuration="2m11.769808703s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.768926028 +0000 UTC m=+169.277096832" watchObservedRunningTime="2026-01-28 15:21:18.769808703 +0000 UTC m=+169.277979507" Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.874258 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.874568 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.374556203 +0000 UTC m=+169.882727007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.925530 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.975926 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.976133 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.976189 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.976217 4656 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.976240 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.977313 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.477288476 +0000 UTC m=+169.985459280 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.978963 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.988196 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.988750 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.003044 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.086044 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.086721 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.586706351 +0000 UTC m=+170.094877145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.086821 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.094665 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.168963 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.169311 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.187606 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.187950 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.68792627 +0000 UTC m=+170.196097074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.188041 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.188408 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.688399163 +0000 UTC m=+170.196569967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.205963 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" podStartSLOduration=132.205940938 podStartE2EDuration="2m12.205940938s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:19.129413598 +0000 UTC m=+169.637584402" watchObservedRunningTime="2026-01-28 15:21:19.205940938 +0000 UTC m=+169.714111742" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.218282 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.291932 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.292567 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 15:21:19.792547077 +0000 UTC m=+170.300717881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.306436 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq"] Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.306471 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj"] Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.368232 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qh5kz" event={"ID":"01e19302-0470-49dd-88d5-9a568e820278","Type":"ContainerStarted","Data":"1a1f7fa9e4a70f8c3d2d190949dcf97438110261ae70805c3877e4f327e5704a"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.398670 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.399004 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 15:21:19.898992386 +0000 UTC m=+170.407163190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.399034 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" event={"ID":"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4","Type":"ContainerStarted","Data":"b94ea7db76aa2d3c56f14a78828a9083d75db0fe374f0cb5f89a603eac9f45d0"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.414398 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" event={"ID":"7f9360e3-7265-42ec-b104-d62ab6ec66f4","Type":"ContainerStarted","Data":"e87c18379671a289316957a7efd3141d299c15a91e36bdf01fb658feeb82828d"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.429424 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" event={"ID":"e4b1709f-307f-47d0-8648-125e2514c80e","Type":"ContainerStarted","Data":"d53619bcd27728a4abf49cb6582faadcbd201c77d2a5febb04ed32ad423d252d"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.471643 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.472002 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:19 crc 
kubenswrapper[4656]: I0128 15:21:19.472383 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" event={"ID":"d05cf23a-0ecb-4cd3-bafe-0fd7d930d916","Type":"ContainerStarted","Data":"ca0d33b5636daa18e114b9beb1ea9b05a2b3c0ca026d4f1dd7c94e49f0699bc4"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.493244 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9f8ct" event={"ID":"3a1daea1-c036-4141-95dc-ce3567519970","Type":"ContainerStarted","Data":"0949abc4bb4fdcf2e7310573f02d0a8320c185962ca2140cd89620611e66aaa5"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.499660 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.500427 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.0003861 +0000 UTC m=+170.508556914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.516499 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" podStartSLOduration=133.516470852 podStartE2EDuration="2m13.516470852s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:19.499888566 +0000 UTC m=+170.008059370" watchObservedRunningTime="2026-01-28 15:21:19.516470852 +0000 UTC m=+170.024641656" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.543030 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" event={"ID":"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7","Type":"ContainerStarted","Data":"2ccc89aaa2aabb8a9e2f194a6069859e93e63c7aba50492e992ba9629461da97"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.543484 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.560334 4656 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-4twj5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= 
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.560405 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" podUID="a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.578324 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-rmnzt" podStartSLOduration=133.57830574 podStartE2EDuration="2m13.57830574s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:19.57659502 +0000 UTC m=+170.084765824" watchObservedRunningTime="2026-01-28 15:21:19.57830574 +0000 UTC m=+170.086476544" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.614415 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.616092 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.116078875 +0000 UTC m=+170.624249679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.643776 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.653770 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" event={"ID":"c96c02d3-be15-47e5-a4bd-e65644751b10","Type":"ContainerStarted","Data":"9ad3257020ab53e6d6e7736abdaf27636b91d841fdedf6bab1acda9d173e8177"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.683343 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" event={"ID":"c7b09f99-0d13-49a0-8b8d-fc77915a171d","Type":"ContainerStarted","Data":"93539b6344f80c28c33a800b6b17b3b195013261b9320da802d38ff972cee5ee"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.684329 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.692464 4656 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-66pz7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.692520 4656 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.717799 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.719109 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.219084196 +0000 UTC m=+170.727255000 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.740668 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" event={"ID":"68e8cbb8-5319-4c56-9636-3bcefa32d29e","Type":"ContainerStarted","Data":"97c4b00ccbadf2d8928e7cf1797f8981e4c7c8d407a4cbc824055c6fda18731f"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.757366 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" podStartSLOduration=133.757344015 podStartE2EDuration="2m13.757344015s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:19.741734467 +0000 UTC m=+170.249905271" watchObservedRunningTime="2026-01-28 15:21:19.757344015 +0000 UTC m=+170.265514819" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.815960 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" event={"ID":"74b5802b-b8fb-48d1-8723-2c78386825db","Type":"ContainerStarted","Data":"a3c344d4a99b4cc3c2b785ff5edacf8eec4aa5faf13d43bdf4fe9c80a6160a48"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.816863 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.821930 4656 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.823014 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.322999372 +0000 UTC m=+170.831170176 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.833204 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" event={"ID":"c7db3547-02a4-4214-ad16-0b513f48b6d7","Type":"ContainerStarted","Data":"6fd553595df3877db63f6eb1972718c3ac308dc4aee9362b5e813833058f9b2a"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.833248 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl"] Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.862445 4656 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7jpgn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get 
\"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.862489 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.862646 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.878432 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9f8ct" podStartSLOduration=6.878411225 podStartE2EDuration="6.878411225s" podCreationTimestamp="2026-01-28 15:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:19.877415496 +0000 UTC m=+170.385586300" watchObservedRunningTime="2026-01-28 15:21:19.878411225 +0000 UTC m=+170.386582029" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.923405 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.926610 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 15:21:20.426590389 +0000 UTC m=+170.934761193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.936060 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.955284 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" podStartSLOduration=132.955258533 podStartE2EDuration="2m12.955258533s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:19.914248145 +0000 UTC m=+170.422418949" watchObservedRunningTime="2026-01-28 15:21:19.955258533 +0000 UTC m=+170.463429337" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.956497 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267"] Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.025571 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.026147 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.52613372 +0000 UTC m=+171.034304524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.055028 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" podStartSLOduration=133.05500506 podStartE2EDuration="2m13.05500506s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:20.051994464 +0000 UTC m=+170.560165268" watchObservedRunningTime="2026-01-28 15:21:20.05500506 +0000 UTC m=+170.563175864" Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.056046 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sg88v"] Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.094868 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn"] Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.117854 4656 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tvwnv"] Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.138344 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" podStartSLOduration=133.138326285 podStartE2EDuration="2m13.138326285s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:20.131222001 +0000 UTC m=+170.639392795" watchObservedRunningTime="2026-01-28 15:21:20.138326285 +0000 UTC m=+170.646497089" Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.138830 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.139192 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.639148968 +0000 UTC m=+171.147319772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:20 crc kubenswrapper[4656]: W0128 15:21:20.147715 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39d8752e_2237_4115_b66c_a9afc736dffe.slice/crio-a51e5a114f6923b38afae7621e06fcf6d11d5519f48c3207f94d9dc64f5ccb33 WatchSource:0}: Error finding container a51e5a114f6923b38afae7621e06fcf6d11d5519f48c3207f94d9dc64f5ccb33: Status 404 returned error can't find the container with id a51e5a114f6923b38afae7621e06fcf6d11d5519f48c3207f94d9dc64f5ccb33 Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.151793 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs"] Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.173288 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z9pc5"] Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.199331 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" podStartSLOduration=134.199311928 podStartE2EDuration="2m14.199311928s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:20.17922848 +0000 UTC m=+170.687399294" watchObservedRunningTime="2026-01-28 15:21:20.199311928 +0000 UTC m=+170.707482732" Jan 28 15:21:20 crc kubenswrapper[4656]: W0128 
15:21:20.230353 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c23ee3_ca6a_4660_ab9f_84c7d70b7a30.slice/crio-5138d54fc7537701a7c5ffa731096fa8969cc6fe7bb3ca7060f2413eca99bb60 WatchSource:0}: Error finding container 5138d54fc7537701a7c5ffa731096fa8969cc6fe7bb3ca7060f2413eca99bb60: Status 404 returned error can't find the container with id 5138d54fc7537701a7c5ffa731096fa8969cc6fe7bb3ca7060f2413eca99bb60
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.244883 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.245250 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.745236137 +0000 UTC m=+171.253406941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.261102 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz"]
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.365626 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.365969 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.865951877 +0000 UTC m=+171.374122681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.380020 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" podStartSLOduration=133.380002491 podStartE2EDuration="2m13.380002491s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:20.378712214 +0000 UTC m=+170.886883018" watchObservedRunningTime="2026-01-28 15:21:20.380002491 +0000 UTC m=+170.888173295"
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.384656 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" podStartSLOduration=133.384633454 podStartE2EDuration="2m13.384633454s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:20.342777351 +0000 UTC m=+170.850948155" watchObservedRunningTime="2026-01-28 15:21:20.384633454 +0000 UTC m=+170.892804258"
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.466635 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.466917 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.966905358 +0000 UTC m=+171.475076162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.492265 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls"]
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.567791 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.568200 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.068183729 +0000 UTC m=+171.576354523 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.568293 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.568578 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.06857181 +0000 UTC m=+171.576742614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:20 crc kubenswrapper[4656]: W0128 15:21:20.596951 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode69663d4_ff3d_4991_804a_cf8d53a4c3ff.slice/crio-462599b582f5692efaa8c3917147798f2e336595316933b83b88f70d4dc57b35 WatchSource:0}: Error finding container 462599b582f5692efaa8c3917147798f2e336595316933b83b88f70d4dc57b35: Status 404 returned error can't find the container with id 462599b582f5692efaa8c3917147798f2e336595316933b83b88f70d4dc57b35
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.645057 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws"]
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.674776 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.675073 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.175056601 +0000 UTC m=+171.683227405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.712639 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh"]
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.742861 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jgtrg"]
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.742923 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v"]
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.777320 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.777774 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.277754982 +0000 UTC m=+171.785925786 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:20 crc kubenswrapper[4656]: W0128 15:21:20.853817 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod854de0fc_bfab_4d4d_9931_b84561234f71.slice/crio-de62fbe4037b1ede8c829c399b8607c7e9ad2cfbe947ef12fe8a1ad068c3ab4e WatchSource:0}: Error finding container de62fbe4037b1ede8c829c399b8607c7e9ad2cfbe947ef12fe8a1ad068c3ab4e: Status 404 returned error can't find the container with id de62fbe4037b1ede8c829c399b8607c7e9ad2cfbe947ef12fe8a1ad068c3ab4e
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.884676 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.884904 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.384873241 +0000 UTC m=+171.893044045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.884951 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.885302 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.385290493 +0000 UTC m=+171.893461287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.926293 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" event={"ID":"e4b1709f-307f-47d0-8648-125e2514c80e","Type":"ContainerStarted","Data":"1c8afc77c7e68866b80c94cdf59d386aa2a9748d272fc12b5304975aebcb58dc"}
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.930072 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr"
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.936633 4656 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8qvpr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body=
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.936767 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" podUID="e4b1709f-307f-47d0-8648-125e2514c80e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused"
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.962716 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" event={"ID":"8be343e6-2e63-426f-95cc-06f64f7417cc","Type":"ContainerStarted","Data":"71d182287b788f87ff10ed63e21bfc3c2bd149cbc77d7f9ca171c01cc90db7ee"}
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.975610 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" podStartSLOduration=133.975592138 podStartE2EDuration="2m13.975592138s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:20.974902719 +0000 UTC m=+171.483073523" watchObservedRunningTime="2026-01-28 15:21:20.975592138 +0000 UTC m=+171.483762942"
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.990797 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.991115 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.491096594 +0000 UTC m=+171.999267388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.047715 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" event={"ID":"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30","Type":"ContainerStarted","Data":"5138d54fc7537701a7c5ffa731096fa8969cc6fe7bb3ca7060f2413eca99bb60"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.062670 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" event={"ID":"585af9c8-b122-49d3-8640-3dc5fb1613ab","Type":"ContainerStarted","Data":"78b47b7183e7eda72ddade2e1c4a26af07b8b155c25b30e05133914f6c085165"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.062720 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" event={"ID":"585af9c8-b122-49d3-8640-3dc5fb1613ab","Type":"ContainerStarted","Data":"0bd41a410018df5e926d445f57e7f74c7045ed42299b9abc86fc3b13ea531745"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.066508 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" event={"ID":"50ad627b-637f-4763-96c1-4c1beb352c70","Type":"ContainerStarted","Data":"79341d3335196fee4983755fab00ce18943dc4fd5779dc86a40aec7f62882ca8"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.067681 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fzfxb" event={"ID":"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f","Type":"ContainerStarted","Data":"191d80c0b580a061b613a915363bd4782cfd84d5cc4b1be7196124b809f57cc8"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.092140 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.092614 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.592596101 +0000 UTC m=+172.100766905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.124732 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" event={"ID":"c96c02d3-be15-47e5-a4bd-e65644751b10","Type":"ContainerStarted","Data":"c12c10b63d65aac5e1c73df108aa8d8aecd8217e8b9ca6647ef9e26b472d3491"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.129413 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-fzfxb" podStartSLOduration=8.129387069 podStartE2EDuration="8.129387069s" podCreationTimestamp="2026-01-28 15:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:21.124077856 +0000 UTC m=+171.632248660" watchObservedRunningTime="2026-01-28 15:21:21.129387069 +0000 UTC m=+171.637557873"
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.153332 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" event={"ID":"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428","Type":"ContainerStarted","Data":"e7a388047ae6559983f6c0d263656f61087cd167de19ccfd97543f5d6c14d8ed"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.184442 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" podStartSLOduration=134.18441907 podStartE2EDuration="2m14.18441907s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:21.179654223 +0000 UTC m=+171.687825027" watchObservedRunningTime="2026-01-28 15:21:21.18441907 +0000 UTC m=+171.692589874"
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.223427 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.224961 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.724942965 +0000 UTC m=+172.233113769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.283838 4656 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-66pz7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.283883 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.284159 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" event={"ID":"25578c16-69f7-48c0-8a44-040950b9b8a1","Type":"ContainerStarted","Data":"8b5651c9577faa4a2a76aaa78d0b3751ec036612cc4cb26724879ce32d32d8f2"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.284204 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" event={"ID":"25578c16-69f7-48c0-8a44-040950b9b8a1","Type":"ContainerStarted","Data":"1e258fe47c6a006d11d349c174bef29feb367cacb2c7876fb9105c8ca262268d"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.284217 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sg88v" event={"ID":"4c5b7670-d8d9-4ea1-822d-788709c62ee5","Type":"ContainerStarted","Data":"f7966fca94c13ecad508e9989912c019094f7ff5c7625b996d9d7524907e4a42"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.284226 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" event={"ID":"c7b09f99-0d13-49a0-8b8d-fc77915a171d","Type":"ContainerStarted","Data":"bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.303072 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" event={"ID":"7f9360e3-7265-42ec-b104-d62ab6ec66f4","Type":"ContainerStarted","Data":"5d4ee4911f1ddfb54dd51b7df7399797ec6c0ec648b61bf998307bbb48e3c1db"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.314984 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" event={"ID":"39d8752e-2237-4115-b66c-a9afc736dffe","Type":"ContainerStarted","Data":"a51e5a114f6923b38afae7621e06fcf6d11d5519f48c3207f94d9dc64f5ccb33"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.323393 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" event={"ID":"e69663d4-ff3d-4991-804a-cf8d53a4c3ff","Type":"ContainerStarted","Data":"462599b582f5692efaa8c3917147798f2e336595316933b83b88f70d4dc57b35"}
Jan 28 15:21:21 crc kubenswrapper[4656]: W0128 15:21:21.323519 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-09136cbfeb95e9fc4f334503a475c8bab1cd4bc5053014a68c636f1741a3adbd WatchSource:0}: Error finding container 09136cbfeb95e9fc4f334503a475c8bab1cd4bc5053014a68c636f1741a3adbd: Status 404 returned error can't find the container with id 09136cbfeb95e9fc4f334503a475c8bab1cd4bc5053014a68c636f1741a3adbd
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.327372 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.328628 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.828614854 +0000 UTC m=+172.336785658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.348303 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" event={"ID":"390f2e18-1f51-46da-93cf-da6b0d524b0d","Type":"ContainerStarted","Data":"e06b6a69c3004d64fea7e46c4d54ea1a870ee797742cbc9b158b473a61cc59a8"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.350153 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" event={"ID":"4d96cdec-34f1-44e2-9380-40475a720b31","Type":"ContainerStarted","Data":"6b567df716488861abda19cfde6a0636d183de50de321377ec3111ed56b7a17d"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.384059 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" event={"ID":"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4","Type":"ContainerStarted","Data":"569a67924dc0423418b224e60ea5b0e701d514611e9881ca6a7f2dc92dabeb50"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.384949 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz"
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.438018 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.438518 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.938450791 +0000 UTC m=+172.446621595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.439416 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.440351 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" event={"ID":"d05cf23a-0ecb-4cd3-bafe-0fd7d930d916","Type":"ContainerStarted","Data":"006e3ffcc8aed4941813fc81e75331ea15a04953fdfb85cb59fdeab0b476b190"}
Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.442777 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.942763865 +0000 UTC m=+172.450934669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.497576 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" event={"ID":"d8d474a0-0bf1-467d-ab77-4b94a17f7881","Type":"ContainerStarted","Data":"44b4fa9e42e4d262fbcc0a9185da187df76399889a9b594190ea2609a576170c"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.510465 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" event={"ID":"c50cc4de-dd25-4337-a532-3384d5a87626","Type":"ContainerStarted","Data":"b6ef49c0366fd30b3220d08592ed50831f49c59a0192b23fe66ac253689a6e5d"}
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.522194 4656 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9hdsz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.522248 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" podUID="ec3667a0-3a61-42f4-85cd-d9e7eb774fd4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.541290 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.541669 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.041647907 +0000 UTC m=+172.549818711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.650029 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.651830 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.151815603 +0000 UTC m=+172.659986407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.752277 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.752482 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.252450655 +0000 UTC m=+172.760621459 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.752742 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.753229 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.253214707 +0000 UTC m=+172.761385511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.854272 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.854625 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.354610931 +0000 UTC m=+172.862781735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.900233 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.907118 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:21 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:21 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:21 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.907155 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.956912 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.957360 4656 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.457345914 +0000 UTC m=+172.965516718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.058547 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.058822 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.5588065 +0000 UTC m=+173.066977294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.161022 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.161384 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.661370127 +0000 UTC m=+173.169540931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.264279 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.264651 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.764628365 +0000 UTC m=+173.272799169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.264857 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.265217 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.765209342 +0000 UTC m=+173.273380146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.367977 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.368451 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.868434738 +0000 UTC m=+173.376605542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.418657 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.471116 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.472555 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.972542421 +0000 UTC m=+173.480713225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.515748 4656 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7jpgn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.515820 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.562578 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" event={"ID":"390f2e18-1f51-46da-93cf-da6b0d524b0d","Type":"ContainerStarted","Data":"2427060bc22a7a2c3cbdc04fa905915dd99eada2e9402ee77171e61ec043a53b"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.574972 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 
15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.576897 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.076876979 +0000 UTC m=+173.585047793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.603471 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jgtrg" event={"ID":"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938","Type":"ContainerStarted","Data":"ee0ff65b029e86693fa9cc594e9988ae98f70684291f645867a573844bbfbe5e"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.603527 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jgtrg" event={"ID":"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938","Type":"ContainerStarted","Data":"319f8803786fa85989a749b450f5e4f43e414499c7cbd7006a56cebe692b25c6"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.633884 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"85f7d091566a3d3b830b63d15a60ebe6ef417e94f6ca3320950ea19c5c587c5f"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.633930 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"09136cbfeb95e9fc4f334503a475c8bab1cd4bc5053014a68c636f1741a3adbd"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.634466 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.679135 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.680489 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.180472117 +0000 UTC m=+173.688642911 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.682184 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" event={"ID":"50ad627b-637f-4763-96c1-4c1beb352c70","Type":"ContainerStarted","Data":"50356fe90f10b02ee6f2f3e97a6f34d34ffe9764a56a4ac5834da65785f675c7"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.682226 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" event={"ID":"50ad627b-637f-4763-96c1-4c1beb352c70","Type":"ContainerStarted","Data":"3699701108a2dd7eac0729a5347663b8030b7f697b1d552eb5b050d536a9ac54"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.682880 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.696236 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bd1b7a35c96020919b0abc5141f1aa1a830266231d673e2f939a4f09a82046cd"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.696286 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1c5e2bb2ad9bb3bb6d579fdef0a1cda31d7e0e25cb57f404fa359cf0e592bafd"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.732470 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" event={"ID":"854de0fc-bfab-4d4d-9931-b84561234f71","Type":"ContainerStarted","Data":"74af8a954515c5f90d6338105118243f152ce5645fbf58b6274ebf6d47725d98"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.732511 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" event={"ID":"854de0fc-bfab-4d4d-9931-b84561234f71","Type":"ContainerStarted","Data":"de62fbe4037b1ede8c829c399b8607c7e9ad2cfbe947ef12fe8a1ad068c3ab4e"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.732716 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.734522 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" event={"ID":"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30","Type":"ContainerStarted","Data":"2b0a5de010c4db3b7a484a39aa308dddc2f8add652592a6831edf431cf107c5b"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.735860 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"90d4b7e1f8ed086cfdd394059106600cbb6e1094f9dd37a820a2a62af2d50f12"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.735880 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4fdc909d2a176936ea42c47b65103741119040f5aa1bf860f5cd39d83b29c7ce"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.738183 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" event={"ID":"585af9c8-b122-49d3-8640-3dc5fb1613ab","Type":"ContainerStarted","Data":"9c12df696a51f2e3474b678ad4f0af48e2358c5f5fcf0c7bd1ce3d741ee7098e"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.739752 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" event={"ID":"d05cf23a-0ecb-4cd3-bafe-0fd7d930d916","Type":"ContainerStarted","Data":"84b271ee617c17b33eddfa22de05223bfee8268debf4315845201a32c6ffe279"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.741690 4656 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-879dh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.741728 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" podUID="854de0fc-bfab-4d4d-9931-b84561234f71" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.742136 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" event={"ID":"c50cc4de-dd25-4337-a532-3384d5a87626","Type":"ContainerStarted","Data":"f8cf3e2f02a8adc27690493735a08ec17bba5663accabe567959ff41a13aa583"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 
15:21:22.756296 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" event={"ID":"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9","Type":"ContainerStarted","Data":"7edc8ce80170fb625b340ebafd351b32972cf8b8df5461abd8c2d61d65e5ac65"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.756534 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" event={"ID":"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9","Type":"ContainerStarted","Data":"bd4bd9dfb1b4432e1ef6472981e4d256f892f8132aca7a1e3a36d9ba43d88ca5"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.769427 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" event={"ID":"39d8752e-2237-4115-b66c-a9afc736dffe","Type":"ContainerStarted","Data":"165dc058142988df212c57bce3d513d2e48f4fd66ecb8b364f8a450eba17a7cd"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.769474 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" event={"ID":"39d8752e-2237-4115-b66c-a9afc736dffe","Type":"ContainerStarted","Data":"c71633dcf0ca06c5a43ba83df489e09335fb57f25535cc9c4a512daf8db1a3a1"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.774071 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" event={"ID":"d8d474a0-0bf1-467d-ab77-4b94a17f7881","Type":"ContainerStarted","Data":"dd11706f89828f6e170d52ed4ad520fc7897069f7694eeaa5ae551cf364dbd25"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.774109 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" 
event={"ID":"d8d474a0-0bf1-467d-ab77-4b94a17f7881","Type":"ContainerStarted","Data":"dff9ebfc1952e8b2fe521ef41e8c4466736015885dcee3079e2874673c0317f6"}
Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.775726 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" event={"ID":"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428","Type":"ContainerStarted","Data":"6184fea5b3a63f86696cd77a9ac0ea91e06ad7dae86e086c92f6046820633bab"}
Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.781648 4656 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-66pz7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.781709 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.781764 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" event={"ID":"e69663d4-ff3d-4991-804a-cf8d53a4c3ff","Type":"ContainerStarted","Data":"04e01955463a69d93a67e7bc1115f506b452a1e88766b311bb4eecc1a0486190"}
Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.782102 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.785618 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.285596248 +0000 UTC m=+173.793767042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.786579 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.788014 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.288007017 +0000 UTC m=+173.796177821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.834142 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr"
Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.888916 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.890669 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.390648597 +0000 UTC m=+173.898819391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.917383 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:21:22 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld
Jan 28 15:21:22 crc kubenswrapper[4656]: [+]process-running ok
Jan 28 15:21:22 crc kubenswrapper[4656]: healthz check failed
Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.917461 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.919411 4656 patch_prober.go:28] interesting pod/apiserver-76f77b778f-j8tlz container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 28 15:21:22 crc kubenswrapper[4656]: [+]log ok
Jan 28 15:21:22 crc kubenswrapper[4656]: [+]etcd ok
Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/max-in-flight-filter ok
Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 28 15:21:22 crc kubenswrapper[4656]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 28 15:21:22 crc kubenswrapper[4656]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/openshift.io-startinformers ok
Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 28 15:21:22 crc kubenswrapper[4656]: livez check failed
Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.919497 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" podUID="57598ddc-f214-47b1-bdef-10bdf94607d1" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.948050 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" podStartSLOduration=135.948023116 podStartE2EDuration="2m15.948023116s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:22.933815408 +0000 UTC m=+173.441986212" watchObservedRunningTime="2026-01-28 15:21:22.948023116 +0000 UTC m=+173.456193920"
Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.993563 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.996774 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.496749147 +0000 UTC m=+174.004919951 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.084191 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" podStartSLOduration=136.084147968 podStartE2EDuration="2m16.084147968s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.083594783 +0000 UTC m=+173.591765607" watchObservedRunningTime="2026-01-28 15:21:23.084147968 +0000 UTC m=+173.592318782"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.094513 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.094900 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.594881777 +0000 UTC m=+174.103052581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.195750 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.196077 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.696061315 +0000 UTC m=+174.204232119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.241672 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" podStartSLOduration=137.241643315 podStartE2EDuration="2m17.241643315s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.234625063 +0000 UTC m=+173.742795867" watchObservedRunningTime="2026-01-28 15:21:23.241643315 +0000 UTC m=+173.749814119"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.297313 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.297537 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.79750191 +0000 UTC m=+174.305672714 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.340914 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" podStartSLOduration=136.340888577 podStartE2EDuration="2m16.340888577s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.338820028 +0000 UTC m=+173.846990842" watchObservedRunningTime="2026-01-28 15:21:23.340888577 +0000 UTC m=+173.849059381"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.343150 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" podStartSLOduration=136.343138202 podStartE2EDuration="2m16.343138202s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.282140919 +0000 UTC m=+173.790311723" watchObservedRunningTime="2026-01-28 15:21:23.343138202 +0000 UTC m=+173.851309006"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.389966 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" podStartSLOduration=137.389945537 podStartE2EDuration="2m17.389945537s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.389917146 +0000 UTC m=+173.898087950" watchObservedRunningTime="2026-01-28 15:21:23.389945537 +0000 UTC m=+173.898116341"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.399070 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.399681 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.899656636 +0000 UTC m=+174.407827440 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.459225 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qh5kz" podStartSLOduration=136.459195728 podStartE2EDuration="2m16.459195728s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.450798526 +0000 UTC m=+173.958969340" watchObservedRunningTime="2026-01-28 15:21:23.459195728 +0000 UTC m=+173.967366542"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.500907 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.501142 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.001106132 +0000 UTC m=+174.509276946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.501256 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.501557 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.001544445 +0000 UTC m=+174.509715249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.547766 4656 csr.go:261] certificate signing request csr-qq48q is approved, waiting to be issued
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.561422 4656 csr.go:257] certificate signing request csr-qq48q is issued
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.602190 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.602446 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.102405374 +0000 UTC m=+174.610576188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.602543 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.603017 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.103007161 +0000 UTC m=+174.611177965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.606081 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" podStartSLOduration=136.606062599 podStartE2EDuration="2m16.606062599s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.604522894 +0000 UTC m=+174.112693718" watchObservedRunningTime="2026-01-28 15:21:23.606062599 +0000 UTC m=+174.114233403"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.607133 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" podStartSLOduration=136.607127619 podStartE2EDuration="2m16.607127619s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.508274758 +0000 UTC m=+174.016445562" watchObservedRunningTime="2026-01-28 15:21:23.607127619 +0000 UTC m=+174.115298423"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.704137 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.704459 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.204442926 +0000 UTC m=+174.712613730 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.708887 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" podStartSLOduration=136.708861573 podStartE2EDuration="2m16.708861573s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.667655019 +0000 UTC m=+174.175825823" watchObservedRunningTime="2026-01-28 15:21:23.708861573 +0000 UTC m=+174.217032397"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.738142 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" podStartSLOduration=135.738115134 podStartE2EDuration="2m15.738115134s" podCreationTimestamp="2026-01-28 15:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.709924444 +0000 UTC m=+174.218095258" watchObservedRunningTime="2026-01-28 15:21:23.738115134 +0000 UTC m=+174.246285938"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.763463 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" podStartSLOduration=138.763448712 podStartE2EDuration="2m18.763448712s" podCreationTimestamp="2026-01-28 15:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.740189614 +0000 UTC m=+174.248360428" watchObservedRunningTime="2026-01-28 15:21:23.763448712 +0000 UTC m=+174.271619516"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.781866 4656 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7jpgn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.782230 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.782565 4656 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9hdsz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.782628 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" podUID="ec3667a0-3a61-42f4-85cd-d9e7eb774fd4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.783406 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sg88v" event={"ID":"4c5b7670-d8d9-4ea1-822d-788709c62ee5","Type":"ContainerStarted","Data":"047624a09e8965505784965d1c922c05ab8cc3345ea2d1d455281b6afecd6680"}
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.784777 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jgtrg" event={"ID":"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938","Type":"ContainerStarted","Data":"15db3dae36d477e53fb31eb72e460d2ed94442d08f4c61f3d1f1990808794d5e"}
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.785023 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jgtrg"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.788619 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" event={"ID":"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9","Type":"ContainerStarted","Data":"14efe49be0f1cf209a9694e16a2c7e25612b1e5546c3a42f5bd715797b6c45e7"}
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.815092 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.820041 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.320018788 +0000 UTC m=+174.828189632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.827556 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" podStartSLOduration=136.827532484 podStartE2EDuration="2m16.827532484s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.824783165 +0000 UTC m=+174.332953989" watchObservedRunningTime="2026-01-28 15:21:23.827532484 +0000 UTC m=+174.335703288"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.875102 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.887240 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" podStartSLOduration=136.887213499 podStartE2EDuration="2m16.887213499s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.885547421 +0000 UTC m=+174.393718225" watchObservedRunningTime="2026-01-28 15:21:23.887213499 +0000 UTC m=+174.395384313"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.887350 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" podStartSLOduration=136.887345673 podStartE2EDuration="2m16.887345673s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.849537716 +0000 UTC m=+174.357708520" watchObservedRunningTime="2026-01-28 15:21:23.887345673 +0000 UTC m=+174.395516477"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.897752 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:21:23 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld
Jan 28 15:21:23 crc kubenswrapper[4656]: [+]process-running ok
Jan 28 15:21:23 crc kubenswrapper[4656]: healthz check failed
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.897833 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.920465 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.921204 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.420764133 +0000 UTC m=+174.928934937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.921324 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.922389 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.42237414 +0000 UTC m=+174.930544954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.022678 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.023129 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.523107645 +0000 UTC m=+175.031278449 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.060810 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" podStartSLOduration=136.060783958 podStartE2EDuration="2m16.060783958s" podCreationTimestamp="2026-01-28 15:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:24.058910414 +0000 UTC m=+174.567081218" watchObservedRunningTime="2026-01-28 15:21:24.060783958 +0000 UTC m=+174.568954762" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.062579 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" podStartSLOduration=137.062565629 podStartE2EDuration="2m17.062565629s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.991723833 +0000 UTC m=+174.499894637" watchObservedRunningTime="2026-01-28 15:21:24.062565629 +0000 UTC m=+174.570736443" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.124511 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" 
(UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.124867 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.624854359 +0000 UTC m=+175.133025163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.144716 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jgtrg" podStartSLOduration=11.144696539 podStartE2EDuration="11.144696539s" podCreationTimestamp="2026-01-28 15:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:24.136934236 +0000 UTC m=+174.645105040" watchObservedRunningTime="2026-01-28 15:21:24.144696539 +0000 UTC m=+174.652867343" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.225970 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.226194 4656 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.72613455 +0000 UTC m=+175.234305374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.226337 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.226764 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.726743617 +0000 UTC m=+175.234914421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.319265 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gzr9v"] Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.320539 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.324268 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.326967 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.327203 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.827154203 +0000 UTC m=+175.335325017 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.327514 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.327869 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.827853713 +0000 UTC m=+175.336024527 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.355618 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzr9v"] Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.429435 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.429771 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.929736422 +0000 UTC m=+175.437907226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.430373 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-utilities\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.430474 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.430565 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-catalog-content\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.430684 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftms7\" (UniqueName: 
\"kubernetes.io/projected/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-kube-api-access-ftms7\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.431057 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.931044269 +0000 UTC m=+175.439215073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.481187 4656 patch_prober.go:28] interesting pod/apiserver-76f77b778f-j8tlz container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]log ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]etcd ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/generic-apiserver-start-informers ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/max-in-flight-filter ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 28 15:21:24 crc kubenswrapper[4656]: 
[-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/project.openshift.io-projectcache ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/openshift.io-startinformers ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 28 15:21:24 crc kubenswrapper[4656]: livez check failed Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.481951 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" podUID="57598ddc-f214-47b1-bdef-10bdf94607d1" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.510974 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w4vpf"] Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.512260 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: W0128 15:21:24.524724 4656 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: secrets "community-operators-dockercfg-dmngl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.524784 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"community-operators-dockercfg-dmngl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.531511 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.531702 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.031663911 +0000 UTC m=+175.539834725 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.531750 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftms7\" (UniqueName: \"kubernetes.io/projected/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-kube-api-access-ftms7\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.531921 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-utilities\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.531998 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.532069 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-catalog-content\") pod \"certified-operators-gzr9v\" (UID: 
\"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.532394 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-utilities\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.532554 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.032533736 +0000 UTC m=+175.540704530 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.532684 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-catalog-content\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.546099 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w4vpf"] Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.560328 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ftms7\" (UniqueName: \"kubernetes.io/projected/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-kube-api-access-ftms7\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.564311 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-28 15:16:23 +0000 UTC, rotation deadline is 2026-10-14 07:46:28.543650988 +0000 UTC Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.564572 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6208h25m3.979083081s for next certificate rotation Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.635603 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.637373 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.637611 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-utilities\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.637643 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-catalog-content\") pod 
\"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.637664 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d5jj\" (UniqueName: \"kubernetes.io/projected/7de9fc74-9948-4e73-ac93-25f9c22189ce-kube-api-access-2d5jj\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.637906 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.137887234 +0000 UTC m=+175.646058038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.648320 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.738689 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-utilities\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " 
pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.739005 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-catalog-content\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.739041 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d5jj\" (UniqueName: \"kubernetes.io/projected/7de9fc74-9948-4e73-ac93-25f9c22189ce-kube-api-access-2d5jj\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.739086 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.739277 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-utilities\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.739461 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 15:21:25.239445613 +0000 UTC m=+175.747616477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.739531 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-catalog-content\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.756793 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5p48j"] Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.757760 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.807480 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.807538 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.807502 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.807945 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.826434 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sg88v" event={"ID":"4c5b7670-d8d9-4ea1-822d-788709c62ee5","Type":"ContainerStarted","Data":"2406d77731cc6ca8f0db660ba1c05448b07d90cdfcff6256ad01a1a76fbffb6e"} Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.841007 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.841219 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnkz5\" (UniqueName: \"kubernetes.io/projected/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-kube-api-access-xnkz5\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.841246 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-utilities\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.841285 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-catalog-content\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.841446 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.341428104 +0000 UTC m=+175.849598908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.854454 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d5jj\" (UniqueName: \"kubernetes.io/projected/7de9fc74-9948-4e73-ac93-25f9c22189ce-kube-api-access-2d5jj\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.861908 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5p48j"] Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.901424 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:24 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:24 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:24 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.901487 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.942291 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.942368 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnkz5\" (UniqueName: \"kubernetes.io/projected/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-kube-api-access-xnkz5\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.942396 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-utilities\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.942440 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-catalog-content\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.943915 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-utilities\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.944322 4656 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-catalog-content\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.944900 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.444883597 +0000 UTC m=+175.953054401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.971977 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-szsbj"] Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.972865 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.003993 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnkz5\" (UniqueName: \"kubernetes.io/projected/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-kube-api-access-xnkz5\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.009547 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-szsbj"] Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.046811 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.047141 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj8vt\" (UniqueName: \"kubernetes.io/projected/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-kube-api-access-nj8vt\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.047228 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-utilities\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.047262 4656 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-catalog-content\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.047357 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.547340042 +0000 UTC m=+176.055510846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.089623 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.148870 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.148922 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-utilities\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.148953 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-catalog-content\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.149001 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj8vt\" (UniqueName: \"kubernetes.io/projected/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-kube-api-access-nj8vt\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.150599 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 15:21:25.650577599 +0000 UTC m=+176.158748403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.151093 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-utilities\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.153445 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-catalog-content\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.235079 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj8vt\" (UniqueName: \"kubernetes.io/projected/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-kube-api-access-nj8vt\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.255448 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.258550 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.758517011 +0000 UTC m=+176.266687815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.358244 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.358677 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.858658388 +0000 UTC m=+176.366829192 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.459969 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.460681 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.96066217 +0000 UTC m=+176.468832974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.478335 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.478376 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.493920 4656 patch_prober.go:28] interesting pod/console-f9d7485db-jrkdc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.494208 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jrkdc" podUID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.538737 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.547303 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.549507 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.565502 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.566830 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.066812971 +0000 UTC m=+176.574983775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.671867 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.672308 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.172288752 +0000 UTC m=+176.680459556 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.725971 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzr9v"] Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.774049 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.774557 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.274531961 +0000 UTC m=+176.782702775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.853408 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzr9v" event={"ID":"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9","Type":"ContainerStarted","Data":"c98421a3a0c42729a0b7a3850570dd8ac89ef8e57c178261115d715189f4f351"} Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.860504 4656 generic.go:334] "Generic (PLEG): container finished" podID="25578c16-69f7-48c0-8a44-040950b9b8a1" containerID="8b5651c9577faa4a2a76aaa78d0b3751ec036612cc4cb26724879ce32d32d8f2" exitCode=0 Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.860603 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" event={"ID":"25578c16-69f7-48c0-8a44-040950b9b8a1","Type":"ContainerDied","Data":"8b5651c9577faa4a2a76aaa78d0b3751ec036612cc4cb26724879ce32d32d8f2"} Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.878989 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.879357 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.379340333 +0000 UTC m=+176.887511137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.910540 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:25 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:25 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:25 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.910609 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.983151 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.984730 4656 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.484707651 +0000 UTC m=+176.992878515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.084917 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.085236 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.5852161 +0000 UTC m=+177.093386904 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.187084 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.187849 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.687832859 +0000 UTC m=+177.196003673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.291899 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.292260 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.79224444 +0000 UTC m=+177.300415244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.292480 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.312603 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nhqpx"] Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.313632 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.329688 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.331122 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-szsbj"] Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.353990 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhqpx"] Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.392986 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqjd8\" (UniqueName: \"kubernetes.io/projected/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-kube-api-access-rqjd8\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:21:26 crc 
kubenswrapper[4656]: I0128 15:21:26.393066 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-catalog-content\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.393101 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-utilities\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.393137 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.399334 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.899314998 +0000 UTC m=+177.407485882 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.494321 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.494532 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-utilities\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.494645 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqjd8\" (UniqueName: \"kubernetes.io/projected/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-kube-api-access-rqjd8\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.494681 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-catalog-content\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " 
pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.495153 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-catalog-content\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.495561 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.995541663 +0000 UTC m=+177.503712467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.495649 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-utilities\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.582235 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqjd8\" (UniqueName: \"kubernetes.io/projected/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-kube-api-access-rqjd8\") pod \"redhat-marketplace-nhqpx\" (UID: 
\"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.597081 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.597468 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.097455452 +0000 UTC m=+177.605626256 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.668497 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.687947 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5p48j"] Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.699823 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.700354 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.200334839 +0000 UTC m=+177.708505643 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.713858 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.724148 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cxc6z"] Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.732199 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.778526 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cxc6z"] Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.801774 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-catalog-content\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.802067 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-utilities\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.802371 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm8p5\" (UniqueName: \"kubernetes.io/projected/e4ed5142-92c2-4f59-a383-f91999ce3dff-kube-api-access-lm8p5\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.802502 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 
15:21:26.803030 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.30300824 +0000 UTC m=+177.811179074 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.887793 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w4vpf"] Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.897915 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.904700 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.905251 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-catalog-content\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.905287 4656 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-utilities\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.905395 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm8p5\" (UniqueName: \"kubernetes.io/projected/e4ed5142-92c2-4f59-a383-f91999ce3dff-kube-api-access-lm8p5\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.905922 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.405900277 +0000 UTC m=+177.914071081 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.906807 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-catalog-content\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.907109 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-utilities\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.912518 4656 generic.go:334] "Generic (PLEG): container finished" podID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerID="6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a" exitCode=0 Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.912680 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzr9v" event={"ID":"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9","Type":"ContainerDied","Data":"6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a"} Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.915560 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p48j" 
event={"ID":"42c5c29d-eebc-40b2-8a6d-a7a592efd69d","Type":"ContainerStarted","Data":"97e67bc73ce17661993af527d6be763d8d88936ad8acbd056a6ba60090f1140e"} Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.926535 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szsbj" event={"ID":"d6812603-edd0-45f4-b2b3-6d9ece7e98c2","Type":"ContainerStarted","Data":"924a066d9ecea51a84946c62ff66f97e3aaf6278887d64d1b9553585e9692514"} Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.932577 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.934284 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:26 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:26 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:26 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.934341 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.942974 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm8p5\" (UniqueName: \"kubernetes.io/projected/e4ed5142-92c2-4f59-a383-f91999ce3dff-kube-api-access-lm8p5\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.966038 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-sg88v" event={"ID":"4c5b7670-d8d9-4ea1-822d-788709c62ee5","Type":"ContainerStarted","Data":"a3d37359b98e005965df7bde715526e78c09160603d980e647ece656e9f1f863"} Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.007497 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.011923 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.511908334 +0000 UTC m=+178.020079138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.115495 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.119881 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.120506 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.620485784 +0000 UTC m=+178.128656588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.223960 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.224567 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.724555325 +0000 UTC m=+178.232726129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.267720 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhqpx"] Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.325362 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.325742 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.825722493 +0000 UTC m=+178.333893297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.431361 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.431831 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.931813132 +0000 UTC m=+178.439983936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.434120 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.533749 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25578c16-69f7-48c0-8a44-040950b9b8a1-secret-volume\") pod \"25578c16-69f7-48c0-8a44-040950b9b8a1\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.533841 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25578c16-69f7-48c0-8a44-040950b9b8a1-config-volume\") pod \"25578c16-69f7-48c0-8a44-040950b9b8a1\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.533984 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.534072 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9pk9\" (UniqueName: \"kubernetes.io/projected/25578c16-69f7-48c0-8a44-040950b9b8a1-kube-api-access-j9pk9\") pod \"25578c16-69f7-48c0-8a44-040950b9b8a1\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.534547 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.034514204 +0000 UTC m=+178.542685048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.535063 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25578c16-69f7-48c0-8a44-040950b9b8a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "25578c16-69f7-48c0-8a44-040950b9b8a1" (UID: "25578c16-69f7-48c0-8a44-040950b9b8a1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.556058 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25578c16-69f7-48c0-8a44-040950b9b8a1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "25578c16-69f7-48c0-8a44-040950b9b8a1" (UID: "25578c16-69f7-48c0-8a44-040950b9b8a1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.557618 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25578c16-69f7-48c0-8a44-040950b9b8a1-kube-api-access-j9pk9" (OuterVolumeSpecName: "kube-api-access-j9pk9") pod "25578c16-69f7-48c0-8a44-040950b9b8a1" (UID: "25578c16-69f7-48c0-8a44-040950b9b8a1"). InnerVolumeSpecName "kube-api-access-j9pk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.591519 4656 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.637068 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.637206 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9pk9\" (UniqueName: \"kubernetes.io/projected/25578c16-69f7-48c0-8a44-040950b9b8a1-kube-api-access-j9pk9\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.637220 4656 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25578c16-69f7-48c0-8a44-040950b9b8a1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.637229 4656 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25578c16-69f7-48c0-8a44-040950b9b8a1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.637492 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.137477723 +0000 UTC m=+178.645648527 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.716592 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cxc6z"] Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.738182 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.738384 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.238360702 +0000 UTC m=+178.746531516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.738474 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.738978 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.23896742 +0000 UTC m=+178.747138224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.839509 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.839857 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.339816938 +0000 UTC m=+178.847987742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.840007 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.840401 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.340391845 +0000 UTC m=+178.848562649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.900800 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:27 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:27 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:27 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.900856 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.916967 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8dc6j"] Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.917231 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25578c16-69f7-48c0-8a44-040950b9b8a1" containerName="collect-profiles" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.917273 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="25578c16-69f7-48c0-8a44-040950b9b8a1" containerName="collect-profiles" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.917439 4656 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="25578c16-69f7-48c0-8a44-040950b9b8a1" containerName="collect-profiles" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.918273 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.922522 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.941236 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.941457 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.441431779 +0000 UTC m=+178.949602583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.941538 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-utilities\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.941624 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkjf8\" (UniqueName: \"kubernetes.io/projected/a6b1aae7-caaa-427d-8b07-705b02e81763-kube-api-access-zkjf8\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.941665 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-catalog-content\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.941756 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.942120 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.442108988 +0000 UTC m=+178.950279792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.955180 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8dc6j"] Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.002973 4656 generic.go:334] "Generic (PLEG): container finished" podID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerID="b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414" exitCode=0 Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.003110 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szsbj" event={"ID":"d6812603-edd0-45f4-b2b3-6d9ece7e98c2","Type":"ContainerDied","Data":"b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.010918 4656 generic.go:334] "Generic (PLEG): container finished" podID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerID="cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98" exitCode=0 Jan 28 15:21:28 crc 
kubenswrapper[4656]: I0128 15:21:28.011021 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p48j" event={"ID":"42c5c29d-eebc-40b2-8a6d-a7a592efd69d","Type":"ContainerDied","Data":"cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.016313 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.016666 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" event={"ID":"25578c16-69f7-48c0-8a44-040950b9b8a1","Type":"ContainerDied","Data":"1e258fe47c6a006d11d349c174bef29feb367cacb2c7876fb9105c8ca262268d"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.016884 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e258fe47c6a006d11d349c174bef29feb367cacb2c7876fb9105c8ca262268d" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.027221 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sg88v" event={"ID":"4c5b7670-d8d9-4ea1-822d-788709c62ee5","Type":"ContainerStarted","Data":"d2e28b36c0eb1989052cb495e3bc543bf489515a0fd61fb01510c270a01d64cb"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.034885 4656 generic.go:334] "Generic (PLEG): container finished" podID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerID="daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050" exitCode=0 Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.035102 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhqpx" event={"ID":"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1","Type":"ContainerDied","Data":"daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050"} Jan 28 15:21:28 crc 
kubenswrapper[4656]: I0128 15:21:28.035196 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhqpx" event={"ID":"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1","Type":"ContainerStarted","Data":"cf973c02c04ef3c79efcf712d2d128bb8313be52f3235da8828930c75b3c34ff"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.044448 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.044813 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-utilities\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.044895 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkjf8\" (UniqueName: \"kubernetes.io/projected/a6b1aae7-caaa-427d-8b07-705b02e81763-kube-api-access-zkjf8\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.044956 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-catalog-content\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:28 crc kubenswrapper[4656]: E0128 15:21:28.046126 4656 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.546109947 +0000 UTC m=+179.054280751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.046663 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-utilities\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.048974 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-catalog-content\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.051267 4656 generic.go:334] "Generic (PLEG): container finished" podID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerID="142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8" exitCode=0 Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.051388 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4vpf" 
event={"ID":"7de9fc74-9948-4e73-ac93-25f9c22189ce","Type":"ContainerDied","Data":"142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.051427 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4vpf" event={"ID":"7de9fc74-9948-4e73-ac93-25f9c22189ce","Type":"ContainerStarted","Data":"cf818feb09c0ccfd9e455faaffde6799b01a2c05ed95767814d1627d35d8c054"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.061516 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerStarted","Data":"7cabac754976a0ee49a3872bb0095429cd22f9300449f019606cabdd291f04c7"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.061568 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerStarted","Data":"675ef5c857ca65a9f9f28b7fc7e7d9027ae949f4a6695cc47226b6c49738e9b9"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.064831 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-sg88v" podStartSLOduration=15.064797884 podStartE2EDuration="15.064797884s" podCreationTimestamp="2026-01-28 15:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:28.057315319 +0000 UTC m=+178.565486143" watchObservedRunningTime="2026-01-28 15:21:28.064797884 +0000 UTC m=+178.572968688" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.088132 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkjf8\" (UniqueName: \"kubernetes.io/projected/a6b1aae7-caaa-427d-8b07-705b02e81763-kube-api-access-zkjf8\") pod \"redhat-operators-8dc6j\" (UID: 
\"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.147030 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:28 crc kubenswrapper[4656]: E0128 15:21:28.148187 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.64815418 +0000 UTC m=+179.156324974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.236899 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.247927 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:28 crc kubenswrapper[4656]: E0128 15:21:28.248110 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.748078592 +0000 UTC m=+179.256249406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.248194 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:28 crc kubenswrapper[4656]: E0128 15:21:28.248555 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.748547676 +0000 UTC m=+179.256718480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.305662 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zqqvv"] Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.306795 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.319735 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqqvv"] Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.352682 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.353089 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-catalog-content\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.353137 4656 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4wrb\" (UniqueName: \"kubernetes.io/projected/d4371d7c-f72d-4765-9101-34946d11d0e7-kube-api-access-z4wrb\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: E0128 15:21:28.353940 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.853909674 +0000 UTC m=+179.362080488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.354310 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-utilities\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.355611 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:28 crc kubenswrapper[4656]: E0128 15:21:28.356765 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.856744115 +0000 UTC m=+179.364914919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.452498 4656 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-28T15:21:27.591558923Z","Handler":null,"Name":""} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.457657 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.457948 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-utilities\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: 
I0128 15:21:28.458020 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-catalog-content\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.458038 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4wrb\" (UniqueName: \"kubernetes.io/projected/d4371d7c-f72d-4765-9101-34946d11d0e7-kube-api-access-z4wrb\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.459819 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-utilities\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.459846 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-catalog-content\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: E0128 15:21:28.459934 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.95991334 +0000 UTC m=+179.468084144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.480605 4656 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.480651 4656 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.496601 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4wrb\" (UniqueName: \"kubernetes.io/projected/d4371d7c-f72d-4765-9101-34946d11d0e7-kube-api-access-z4wrb\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.542347 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8dc6j"] Jan 28 15:21:28 crc kubenswrapper[4656]: W0128 15:21:28.550101 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6b1aae7_caaa_427d_8b07_705b02e81763.slice/crio-4e651d4a13052ffb208c07169669a56bdcc7dc1c17ca4b751d5784fb82cafa0f WatchSource:0}: Error finding container 4e651d4a13052ffb208c07169669a56bdcc7dc1c17ca4b751d5784fb82cafa0f: Status 404 returned error can't find the container with id 
4e651d4a13052ffb208c07169669a56bdcc7dc1c17ca4b751d5784fb82cafa0f Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.559317 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.561379 4656 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.561405 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.589922 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.642705 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.660315 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.705310 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.739615 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.898853 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:28 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:28 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:28 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.899313 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.980232 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqqvv"] Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.092342 4656 generic.go:334] "Generic (PLEG): container finished" podID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerID="98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388" exitCode=0 Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.093577 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dc6j" event={"ID":"a6b1aae7-caaa-427d-8b07-705b02e81763","Type":"ContainerDied","Data":"98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388"} Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.093609 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dc6j" event={"ID":"a6b1aae7-caaa-427d-8b07-705b02e81763","Type":"ContainerStarted","Data":"4e651d4a13052ffb208c07169669a56bdcc7dc1c17ca4b751d5784fb82cafa0f"} Jan 28 15:21:29 
crc kubenswrapper[4656]: I0128 15:21:29.124582 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqvv" event={"ID":"d4371d7c-f72d-4765-9101-34946d11d0e7","Type":"ContainerStarted","Data":"212e83a1856ef8c98ccafed8d21edbd6de5e1d834be4a94f2e2855ab3ac7c760"} Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.136534 4656 generic.go:334] "Generic (PLEG): container finished" podID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerID="7cabac754976a0ee49a3872bb0095429cd22f9300449f019606cabdd291f04c7" exitCode=0 Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.137287 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerDied","Data":"7cabac754976a0ee49a3872bb0095429cd22f9300449f019606cabdd291f04c7"} Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.185526 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.297376 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5r48x"] Jan 28 15:21:29 crc kubenswrapper[4656]: W0128 15:21:29.315392 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5823f5c7_fabe_4d4b_a3df_49349749b19e.slice/crio-54b236edb7b566b5bae8f9b4e93d4a4d144dfcf0ffa1b6e2bf3e66d3161ef327 WatchSource:0}: Error finding container 54b236edb7b566b5bae8f9b4e93d4a4d144dfcf0ffa1b6e2bf3e66d3161ef327: Status 404 returned error can't find the container with id 54b236edb7b566b5bae8f9b4e93d4a4d144dfcf0ffa1b6e2bf3e66d3161ef327 Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.479510 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.486492 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.898698 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:29 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:29 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:29 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.898791 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.185806 4656 generic.go:334] "Generic (PLEG): container finished" podID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerID="1c47ef6d923405e3cbd5ebf98dc7b072df4b7f29ebc5c89b0c18a12b52617312" exitCode=0 Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.185894 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqvv" event={"ID":"d4371d7c-f72d-4765-9101-34946d11d0e7","Type":"ContainerDied","Data":"1c47ef6d923405e3cbd5ebf98dc7b072df4b7f29ebc5c89b0c18a12b52617312"} Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.188579 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" event={"ID":"5823f5c7-fabe-4d4b-a3df-49349749b19e","Type":"ContainerStarted","Data":"54b236edb7b566b5bae8f9b4e93d4a4d144dfcf0ffa1b6e2bf3e66d3161ef327"} Jan 28 15:21:30 crc 
kubenswrapper[4656]: I0128 15:21:30.236271 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.238810 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.250820 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.251443 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.254743 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.301153 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.301271 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.435665 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.435935 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.436320 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.483011 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.600669 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.897248 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:30 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:30 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:30 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.897343 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:31 crc kubenswrapper[4656]: I0128 15:21:31.265421 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" event={"ID":"5823f5c7-fabe-4d4b-a3df-49349749b19e","Type":"ContainerStarted","Data":"e21bd5b2594163e4a314e6e9e388228b463a2c6dc9baa4a831b207ddedbec967"} Jan 28 15:21:31 crc kubenswrapper[4656]: I0128 15:21:31.270111 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:31 crc kubenswrapper[4656]: I0128 15:21:31.310800 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" podStartSLOduration=144.310780105 podStartE2EDuration="2m24.310780105s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:31.304949087 +0000 UTC m=+181.813119891" watchObservedRunningTime="2026-01-28 
15:21:31.310780105 +0000 UTC m=+181.818950909" Jan 28 15:21:31 crc kubenswrapper[4656]: I0128 15:21:31.396641 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:21:31 crc kubenswrapper[4656]: W0128 15:21:31.466337 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda9bf2ac2_93ee_4cc9_b631_0c45a1326ad3.slice/crio-6fb5ecaa8f9ecd2f1b198b56ef1b535a4ed8a50d87add9511fbf2112c0b02fcd WatchSource:0}: Error finding container 6fb5ecaa8f9ecd2f1b198b56ef1b535a4ed8a50d87add9511fbf2112c0b02fcd: Status 404 returned error can't find the container with id 6fb5ecaa8f9ecd2f1b198b56ef1b535a4ed8a50d87add9511fbf2112c0b02fcd Jan 28 15:21:31 crc kubenswrapper[4656]: I0128 15:21:31.898648 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:31 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:31 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:31 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:31 crc kubenswrapper[4656]: I0128 15:21:31.898739 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.352662 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3","Type":"ContainerStarted","Data":"6fb5ecaa8f9ecd2f1b198b56ef1b535a4ed8a50d87add9511fbf2112c0b02fcd"} Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.898670 4656 patch_prober.go:28] interesting 
pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:32 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:32 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:32 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.898753 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.958050 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.959609 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.962463 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.962488 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.963911 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.008758 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.008861 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.117974 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.118091 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.118254 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.159951 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.303057 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.397415 4656 generic.go:334] "Generic (PLEG): container finished" podID="a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3" containerID="dcf5a8e47139b85a5597a192fd620df6c5b8cfb5364a35cb7f5372a9d06ff97e" exitCode=0 Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.397467 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3","Type":"ContainerDied","Data":"dcf5a8e47139b85a5597a192fd620df6c5b8cfb5364a35cb7f5372a9d06ff97e"} Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.887278 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.897236 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:33 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:33 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:33 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.897316 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.439711 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121","Type":"ContainerStarted","Data":"84c0329afaccfa22e761b86fa3fc6e73cbb668de3176eaf9528f58bd7bc3e101"} Jan 28 
15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.817372 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.817436 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.817786 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.817848 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.871601 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.901475 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:34 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:34 crc kubenswrapper[4656]: 
[+]process-running ok Jan 28 15:21:34 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.901545 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.075027 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.167371 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kube-api-access\") pod \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") " Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.167520 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kubelet-dir\") pod \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") " Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.167725 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3" (UID: "a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.167864 4656 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.176733 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3" (UID: "a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.269278 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.482636 4656 patch_prober.go:28] interesting pod/console-f9d7485db-jrkdc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.482716 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jrkdc" podUID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.543272 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3","Type":"ContainerDied","Data":"6fb5ecaa8f9ecd2f1b198b56ef1b535a4ed8a50d87add9511fbf2112c0b02fcd"} Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.543345 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fb5ecaa8f9ecd2f1b198b56ef1b535a4ed8a50d87add9511fbf2112c0b02fcd" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.543264 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.546677 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121","Type":"ContainerStarted","Data":"f5d376ff74ccc7c835ab3d99bdbe7f68d5f51587b1e1eb01388c2f6b4b6034e4"} Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.611379 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.611357015 podStartE2EDuration="3.611357015s" podCreationTimestamp="2026-01-28 15:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:35.60907931 +0000 UTC m=+186.117250134" watchObservedRunningTime="2026-01-28 15:21:35.611357015 +0000 UTC m=+186.119527819" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.897960 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:35 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:35 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:35 crc kubenswrapper[4656]: healthz check failed Jan 28 
15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.898030 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.979905 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.982481 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.015355 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.019935 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.028083 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.580104 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-jcx9v_0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9/cluster-samples-operator/0.log" Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.580632 4656 generic.go:334] "Generic (PLEG): container finished" podID="0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9" containerID="7edc8ce80170fb625b340ebafd351b32972cf8b8df5461abd8c2d61d65e5ac65" exitCode=2 Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.580840 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" event={"ID":"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9","Type":"ContainerDied","Data":"7edc8ce80170fb625b340ebafd351b32972cf8b8df5461abd8c2d61d65e5ac65"} Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.582245 4656 scope.go:117] "RemoveContainer" containerID="7edc8ce80170fb625b340ebafd351b32972cf8b8df5461abd8c2d61d65e5ac65" Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.600622 4656 generic.go:334] "Generic (PLEG): container finished" podID="74e9f8ac-c1b4-420c-b2b0-b08c05ae8121" containerID="f5d376ff74ccc7c835ab3d99bdbe7f68d5f51587b1e1eb01388c2f6b4b6034e4" exitCode=0 Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.600701 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121","Type":"ContainerDied","Data":"f5d376ff74ccc7c835ab3d99bdbe7f68d5f51587b1e1eb01388c2f6b4b6034e4"} Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.748593 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bmj6r"] Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.908379 4656 patch_prober.go:28] interesting 
pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:36 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:36 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:36 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.908877 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:37 crc kubenswrapper[4656]: I0128 15:21:37.636709 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" event={"ID":"11320542-8463-40db-8981-632be2bd5a48","Type":"ContainerStarted","Data":"c36a75183fe84628d125ace03e17ed52ec119308ab220192ac09ebfaa96dda40"} Jan 28 15:21:37 crc kubenswrapper[4656]: I0128 15:21:37.692624 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-jcx9v_0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9/cluster-samples-operator/0.log" Jan 28 15:21:37 crc kubenswrapper[4656]: I0128 15:21:37.693380 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" event={"ID":"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9","Type":"ContainerStarted","Data":"f25ffe50d07dddcbf86b8873e292434a79dc1c8fd111f1caef1f6b655f90d681"} Jan 28 15:21:37 crc kubenswrapper[4656]: I0128 15:21:37.896074 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 
15:21:37 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:37 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:37 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:37 crc kubenswrapper[4656]: I0128 15:21:37.896952 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.097018 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.130792 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kubelet-dir\") pod \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.130891 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kube-api-access\") pod \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.132147 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "74e9f8ac-c1b4-420c-b2b0-b08c05ae8121" (UID: "74e9f8ac-c1b4-420c-b2b0-b08c05ae8121"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.174516 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "74e9f8ac-c1b4-420c-b2b0-b08c05ae8121" (UID: "74e9f8ac-c1b4-420c-b2b0-b08c05ae8121"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.232892 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.232934 4656 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.733873 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" event={"ID":"11320542-8463-40db-8981-632be2bd5a48","Type":"ContainerStarted","Data":"a9342fcc2dba6f40a59075419b212328f9f3c4179cdf65e552f51f43de4c98e8"} Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.780302 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121","Type":"ContainerDied","Data":"84c0329afaccfa22e761b86fa3fc6e73cbb668de3176eaf9528f58bd7bc3e101"} Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.780366 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84c0329afaccfa22e761b86fa3fc6e73cbb668de3176eaf9528f58bd7bc3e101" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.780327 4656 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.898570 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:38 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:38 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:38 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.898941 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:39 crc kubenswrapper[4656]: I0128 15:21:39.896885 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:39 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:39 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:39 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:39 crc kubenswrapper[4656]: I0128 15:21:39.896982 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:40 crc kubenswrapper[4656]: I0128 15:21:40.895904 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:40 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:40 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:40 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:40 crc kubenswrapper[4656]: I0128 15:21:40.896323 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:41 crc kubenswrapper[4656]: I0128 15:21:41.264320 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:21:41 crc kubenswrapper[4656]: I0128 15:21:41.264408 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:21:41 crc kubenswrapper[4656]: I0128 15:21:41.337958 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:21:41 crc kubenswrapper[4656]: I0128 15:21:41.897284 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:41 crc kubenswrapper[4656]: I0128 15:21:41.903330 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.804971 4656 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.805224 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.805387 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.805322 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.806292 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-zrrnn" Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.807030 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.807059 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" 
podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.807029 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"60c65196fb4866dd0b1e0bbc6538e306f747567dc79d0ebdf9efbab4620baf63"} pod="openshift-console/downloads-7954f5f757-zrrnn" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.808696 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" containerID="cri-o://60c65196fb4866dd0b1e0bbc6538e306f747567dc79d0ebdf9efbab4620baf63" gracePeriod=2 Jan 28 15:21:45 crc kubenswrapper[4656]: I0128 15:21:45.490956 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:45 crc kubenswrapper[4656]: I0128 15:21:45.494726 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:45 crc kubenswrapper[4656]: I0128 15:21:45.887118 4656 generic.go:334] "Generic (PLEG): container finished" podID="d903ef3d-1544-4343-b254-15939a05fec0" containerID="60c65196fb4866dd0b1e0bbc6538e306f747567dc79d0ebdf9efbab4620baf63" exitCode=0 Jan 28 15:21:45 crc kubenswrapper[4656]: I0128 15:21:45.887225 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zrrnn" event={"ID":"d903ef3d-1544-4343-b254-15939a05fec0","Type":"ContainerDied","Data":"60c65196fb4866dd0b1e0bbc6538e306f747567dc79d0ebdf9efbab4620baf63"} Jan 28 15:21:48 crc kubenswrapper[4656]: I0128 15:21:48.747910 4656 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:54 crc kubenswrapper[4656]: I0128 15:21:54.805959 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:21:54 crc kubenswrapper[4656]: I0128 15:21:54.806543 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:21:56 crc kubenswrapper[4656]: I0128 15:21:56.723952 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:59 crc kubenswrapper[4656]: I0128 15:21:59.132792 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:22:00 crc kubenswrapper[4656]: I0128 15:22:00.995121 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" event={"ID":"11320542-8463-40db-8981-632be2bd5a48","Type":"ContainerStarted","Data":"284a932738339e158c1fb021a3fc51f5de0c01c0052089e71babe0ff6a3f5050"} Jan 28 15:22:01 crc kubenswrapper[4656]: I0128 15:22:01.019795 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bmj6r" podStartSLOduration=175.019772037 podStartE2EDuration="2m55.019772037s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 15:22:01.017324748 +0000 UTC m=+211.525495562" watchObservedRunningTime="2026-01-28 15:22:01.019772037 +0000 UTC m=+211.527942841" Jan 28 15:22:04 crc kubenswrapper[4656]: I0128 15:22:04.804134 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:22:04 crc kubenswrapper[4656]: I0128 15:22:04.804222 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.743226 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 15:22:07 crc kubenswrapper[4656]: E0128 15:22:07.743640 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e9f8ac-c1b4-420c-b2b0-b08c05ae8121" containerName="pruner" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.743657 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e9f8ac-c1b4-420c-b2b0-b08c05ae8121" containerName="pruner" Jan 28 15:22:07 crc kubenswrapper[4656]: E0128 15:22:07.743666 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3" containerName="pruner" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.743673 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3" containerName="pruner" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.743828 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e9f8ac-c1b4-420c-b2b0-b08c05ae8121" containerName="pruner" Jan 28 15:22:07 
crc kubenswrapper[4656]: I0128 15:22:07.743843 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3" containerName="pruner" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.744343 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.750069 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.751717 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.752512 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.819638 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e313033e-82cc-4bb8-8151-f867b966a330-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.819905 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e313033e-82cc-4bb8-8151-f867b966a330-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.921513 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e313033e-82cc-4bb8-8151-f867b966a330-kubelet-dir\") pod 
\"revision-pruner-9-crc\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.921659 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e313033e-82cc-4bb8-8151-f867b966a330-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.921699 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e313033e-82cc-4bb8-8151-f867b966a330-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.945404 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e313033e-82cc-4bb8-8151-f867b966a330-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:08 crc kubenswrapper[4656]: I0128 15:22:08.060047 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:09 crc kubenswrapper[4656]: E0128 15:22:09.693946 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 15:22:09 crc kubenswrapper[4656]: E0128 15:22:09.694331 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkjf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8dc6j_openshift-marketplace(a6b1aae7-caaa-427d-8b07-705b02e81763): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:09 crc kubenswrapper[4656]: E0128 15:22:09.696139 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8dc6j" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" Jan 28 15:22:10 crc kubenswrapper[4656]: E0128 15:22:10.709134 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8dc6j" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" Jan 28 15:22:10 crc kubenswrapper[4656]: E0128 15:22:10.865112 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 15:22:10 crc kubenswrapper[4656]: E0128 15:22:10.865341 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nj8vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-szsbj_openshift-marketplace(d6812603-edd0-45f4-b2b3-6d9ece7e98c2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:10 crc kubenswrapper[4656]: E0128 15:22:10.866537 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-szsbj" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" Jan 28 15:22:11 crc 
kubenswrapper[4656]: I0128 15:22:11.264774 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:22:11 crc kubenswrapper[4656]: I0128 15:22:11.265136 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:22:11 crc kubenswrapper[4656]: I0128 15:22:11.939656 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 15:22:11 crc kubenswrapper[4656]: I0128 15:22:11.940346 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:11 crc kubenswrapper[4656]: I0128 15:22:11.952319 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.020992 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.021419 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23d13458-d18f-4e50-bd07-61d18319b4c7-kube-api-access\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.021563 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-var-lock\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.127013 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.127106 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/23d13458-d18f-4e50-bd07-61d18319b4c7-kube-api-access\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.127270 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.128846 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-var-lock\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.129129 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-var-lock\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.149432 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23d13458-d18f-4e50-bd07-61d18319b4c7-kube-api-access\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.336337 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: E0128 15:22:12.696782 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-szsbj" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" Jan 28 15:22:12 crc kubenswrapper[4656]: E0128 15:22:12.795892 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 15:22:12 crc kubenswrapper[4656]: E0128 15:22:12.796127 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xnkz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5p48j_openshift-marketplace(42c5c29d-eebc-40b2-8a6d-a7a592efd69d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:12 crc kubenswrapper[4656]: E0128 15:22:12.797504 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5p48j" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" Jan 28 15:22:12 crc 
kubenswrapper[4656]: E0128 15:22:12.815368 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 15:22:12 crc kubenswrapper[4656]: E0128 15:22:12.815555 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4wrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-zqqvv_openshift-marketplace(d4371d7c-f72d-4765-9101-34946d11d0e7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:12 crc kubenswrapper[4656]: E0128 15:22:12.816893 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zqqvv" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.237636 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5p48j" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.357338 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.357747 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2d5jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-w4vpf_openshift-marketplace(7de9fc74-9948-4e73-ac93-25f9c22189ce): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.359144 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-w4vpf" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" Jan 28 15:22:14 crc 
kubenswrapper[4656]: E0128 15:22:14.364208 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.364322 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqjd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-nhqpx_openshift-marketplace(fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.391415 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nhqpx" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.407607 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.407979 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lm8p5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cxc6z_openshift-marketplace(e4ed5142-92c2-4f59-a383-f91999ce3dff): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.409334 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cxc6z" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" Jan 28 15:22:14 crc 
kubenswrapper[4656]: E0128 15:22:14.466707 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.467104 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftms7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-gzr9v_openshift-marketplace(f0fe12da-fb7d-444b-b8d3-47e5988fb7f9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.468298 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gzr9v" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" Jan 28 15:22:14 crc kubenswrapper[4656]: I0128 15:22:14.771067 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 15:22:14 crc kubenswrapper[4656]: I0128 15:22:14.812884 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:22:14 crc kubenswrapper[4656]: I0128 15:22:14.812952 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:22:14 crc kubenswrapper[4656]: I0128 15:22:14.839459 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 15:22:14 crc kubenswrapper[4656]: W0128 15:22:14.845380 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode313033e_82cc_4bb8_8151_f867b966a330.slice/crio-c28fd5ce720db7db4d295d50ce6c13119bb40ef0a2e85929d659b0e26d73b9ef WatchSource:0}: Error finding container 
c28fd5ce720db7db4d295d50ce6c13119bb40ef0a2e85929d659b0e26d73b9ef: Status 404 returned error can't find the container with id c28fd5ce720db7db4d295d50ce6c13119bb40ef0a2e85929d659b0e26d73b9ef Jan 28 15:22:15 crc kubenswrapper[4656]: I0128 15:22:15.109069 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zrrnn" event={"ID":"d903ef3d-1544-4343-b254-15939a05fec0","Type":"ContainerStarted","Data":"8c4abed353fff3de84778aa7698659c7b27f4547e5b6249ec3f720a9ea39afd8"} Jan 28 15:22:15 crc kubenswrapper[4656]: I0128 15:22:15.111572 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zrrnn" Jan 28 15:22:15 crc kubenswrapper[4656]: I0128 15:22:15.111698 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:22:15 crc kubenswrapper[4656]: I0128 15:22:15.111768 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:22:15 crc kubenswrapper[4656]: I0128 15:22:15.116675 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e313033e-82cc-4bb8-8151-f867b966a330","Type":"ContainerStarted","Data":"c28fd5ce720db7db4d295d50ce6c13119bb40ef0a2e85929d659b0e26d73b9ef"} Jan 28 15:22:15 crc kubenswrapper[4656]: I0128 15:22:15.128745 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"23d13458-d18f-4e50-bd07-61d18319b4c7","Type":"ContainerStarted","Data":"ecf52d47e7e50c6dd46bf76ed0fc0d2fa9f91e02bebe98c13fefcc025a606204"} Jan 28 15:22:15 crc kubenswrapper[4656]: E0128 15:22:15.139645 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cxc6z" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" Jan 28 15:22:15 crc kubenswrapper[4656]: E0128 15:22:15.139861 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gzr9v" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" Jan 28 15:22:15 crc kubenswrapper[4656]: E0128 15:22:15.139934 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-w4vpf" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" Jan 28 15:22:15 crc kubenswrapper[4656]: E0128 15:22:15.140012 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-nhqpx" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" Jan 28 15:22:16 crc kubenswrapper[4656]: I0128 15:22:16.135972 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"e313033e-82cc-4bb8-8151-f867b966a330","Type":"ContainerDied","Data":"5dce75929921c7f43d6dd2a96e286682fa25de5c7e3b0e2c341f9b6c1171c3c3"} Jan 28 15:22:16 crc kubenswrapper[4656]: I0128 15:22:16.135786 4656 generic.go:334] "Generic (PLEG): container finished" podID="e313033e-82cc-4bb8-8151-f867b966a330" containerID="5dce75929921c7f43d6dd2a96e286682fa25de5c7e3b0e2c341f9b6c1171c3c3" exitCode=0 Jan 28 15:22:16 crc kubenswrapper[4656]: I0128 15:22:16.137974 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"23d13458-d18f-4e50-bd07-61d18319b4c7","Type":"ContainerStarted","Data":"1b53bded64e844d0eb1de8d68357820c2e6d1bfc568f8f1b250cc8890c0cc777"} Jan 28 15:22:16 crc kubenswrapper[4656]: I0128 15:22:16.138697 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:22:16 crc kubenswrapper[4656]: I0128 15:22:16.138924 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:22:16 crc kubenswrapper[4656]: I0128 15:22:16.179440 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.179414767 podStartE2EDuration="5.179414767s" podCreationTimestamp="2026-01-28 15:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:16.175949309 +0000 UTC m=+226.684120133" watchObservedRunningTime="2026-01-28 15:22:16.179414767 +0000 UTC m=+226.687585571" Jan 28 
15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.145429 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.145497 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.497691 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.555837 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e313033e-82cc-4bb8-8151-f867b966a330-kube-api-access\") pod \"e313033e-82cc-4bb8-8151-f867b966a330\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") " Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.555897 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e313033e-82cc-4bb8-8151-f867b966a330-kubelet-dir\") pod \"e313033e-82cc-4bb8-8151-f867b966a330\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") " Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.556192 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e313033e-82cc-4bb8-8151-f867b966a330-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e313033e-82cc-4bb8-8151-f867b966a330" (UID: "e313033e-82cc-4bb8-8151-f867b966a330"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.573039 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e313033e-82cc-4bb8-8151-f867b966a330-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e313033e-82cc-4bb8-8151-f867b966a330" (UID: "e313033e-82cc-4bb8-8151-f867b966a330"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.657103 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e313033e-82cc-4bb8-8151-f867b966a330-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.657171 4656 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e313033e-82cc-4bb8-8151-f867b966a330-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.988350 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jpgn"] Jan 28 15:22:18 crc kubenswrapper[4656]: I0128 15:22:18.151402 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e313033e-82cc-4bb8-8151-f867b966a330","Type":"ContainerDied","Data":"c28fd5ce720db7db4d295d50ce6c13119bb40ef0a2e85929d659b0e26d73b9ef"} Jan 28 15:22:18 crc kubenswrapper[4656]: I0128 15:22:18.151708 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c28fd5ce720db7db4d295d50ce6c13119bb40ef0a2e85929d659b0e26d73b9ef" Jan 28 15:22:18 crc kubenswrapper[4656]: I0128 15:22:18.151501 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:24 crc kubenswrapper[4656]: I0128 15:22:24.810661 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-zrrnn" Jan 28 15:22:27 crc kubenswrapper[4656]: I0128 15:22:27.235779 4656 generic.go:334] "Generic (PLEG): container finished" podID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerID="f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f" exitCode=0 Jan 28 15:22:27 crc kubenswrapper[4656]: I0128 15:22:27.235949 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dc6j" event={"ID":"a6b1aae7-caaa-427d-8b07-705b02e81763","Type":"ContainerDied","Data":"f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f"} Jan 28 15:22:27 crc kubenswrapper[4656]: I0128 15:22:27.241331 4656 generic.go:334] "Generic (PLEG): container finished" podID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerID="5b4233be7c7686331f99bc85bff4b844e11a65ed773671095a89af9d01cb614c" exitCode=0 Jan 28 15:22:27 crc kubenswrapper[4656]: I0128 15:22:27.241379 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqvv" event={"ID":"d4371d7c-f72d-4765-9101-34946d11d0e7","Type":"ContainerDied","Data":"5b4233be7c7686331f99bc85bff4b844e11a65ed773671095a89af9d01cb614c"} Jan 28 15:22:41 crc kubenswrapper[4656]: I0128 15:22:41.264019 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:22:41 crc kubenswrapper[4656]: I0128 15:22:41.264589 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" 
podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:22:41 crc kubenswrapper[4656]: I0128 15:22:41.264737 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:22:41 crc kubenswrapper[4656]: I0128 15:22:41.265967 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:22:41 crc kubenswrapper[4656]: I0128 15:22:41.266065 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1" gracePeriod=600 Jan 28 15:22:43 crc kubenswrapper[4656]: I0128 15:22:43.048475 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" containerName="oauth-openshift" containerID="cri-o://a3c344d4a99b4cc3c2b785ff5edacf8eec4aa5faf13d43bdf4fe9c80a6160a48" gracePeriod=15 Jan 28 15:22:43 crc kubenswrapper[4656]: I0128 15:22:43.349434 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1" exitCode=0 Jan 28 15:22:43 crc kubenswrapper[4656]: I0128 15:22:43.349492 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1"} Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.356076 4656 generic.go:334] "Generic (PLEG): container finished" podID="74b5802b-b8fb-48d1-8723-2c78386825db" containerID="a3c344d4a99b4cc3c2b785ff5edacf8eec4aa5faf13d43bdf4fe9c80a6160a48" exitCode=0 Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.356191 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" event={"ID":"74b5802b-b8fb-48d1-8723-2c78386825db","Type":"ContainerDied","Data":"a3c344d4a99b4cc3c2b785ff5edacf8eec4aa5faf13d43bdf4fe9c80a6160a48"} Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.836861 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.898491 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl"] Jan 28 15:22:44 crc kubenswrapper[4656]: E0128 15:22:44.898957 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e313033e-82cc-4bb8-8151-f867b966a330" containerName="pruner" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.898984 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e313033e-82cc-4bb8-8151-f867b966a330" containerName="pruner" Jan 28 15:22:44 crc kubenswrapper[4656]: E0128 15:22:44.899010 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" containerName="oauth-openshift" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.899018 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" containerName="oauth-openshift" Jan 28 15:22:44 crc 
kubenswrapper[4656]: I0128 15:22:44.899211 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" containerName="oauth-openshift" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.899425 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="e313033e-82cc-4bb8-8151-f867b966a330" containerName="pruner" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.900009 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.906117 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl"] Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.948974 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-cliconfig\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949009 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-ocp-branding-template\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949034 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-router-certs\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949094 
4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949138 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949155 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949191 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-trusted-ca-bundle\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949217 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949248 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949272 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-service-ca\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949328 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-session\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949368 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74b5802b-b8fb-48d1-8723-2c78386825db-audit-dir\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949397 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmz5l\" (UniqueName: \"kubernetes.io/projected/74b5802b-b8fb-48d1-8723-2c78386825db-kube-api-access-lmz5l\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949426 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.950143 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74b5802b-b8fb-48d1-8723-2c78386825db-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.950821 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.950825 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.951497 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.951763 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.957795 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74b5802b-b8fb-48d1-8723-2c78386825db-kube-api-access-lmz5l" (OuterVolumeSpecName: "kube-api-access-lmz5l") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "kube-api-access-lmz5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.965274 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.966372 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.968151 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.969605 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.975603 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.976442 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.977213 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.977664 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052002 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-router-certs\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052052 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052129 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052175 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 
15:22:45.052220 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052244 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-session\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052290 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mhn8\" (UniqueName: \"kubernetes.io/projected/1424baf7-9559-4343-819e-a2a31200759f-kube-api-access-6mhn8\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052317 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-audit-policies\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052359 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-service-ca\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052379 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052396 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-error\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052426 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-login\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052454 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1424baf7-9559-4343-819e-a2a31200759f-audit-dir\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " 
pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052483 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052530 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052543 4656 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74b5802b-b8fb-48d1-8723-2c78386825db-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052553 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmz5l\" (UniqueName: \"kubernetes.io/projected/74b5802b-b8fb-48d1-8723-2c78386825db-kube-api-access-lmz5l\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052562 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052574 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc 
kubenswrapper[4656]: I0128 15:22:45.052585 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052593 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052603 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052612 4656 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052621 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052630 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052640 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052649 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052657 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.154885 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155374 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155414 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: 
\"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155451 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155499 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-session\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155542 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mhn8\" (UniqueName: \"kubernetes.io/projected/1424baf7-9559-4343-819e-a2a31200759f-kube-api-access-6mhn8\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155569 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-audit-policies\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155610 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-service-ca\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155636 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155661 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-error\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.156755 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.156758 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-service-ca\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " 
pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155697 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-login\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.157291 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-audit-policies\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.157308 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1424baf7-9559-4343-819e-a2a31200759f-audit-dir\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.157355 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.157400 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-router-certs\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.157663 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1424baf7-9559-4343-819e-a2a31200759f-audit-dir\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.158749 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.160235 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.160802 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc 
kubenswrapper[4656]: I0128 15:22:45.163191 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.164506 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-login\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.167220 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-error\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.171021 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.173564 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-session\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.177430 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-router-certs\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.179547 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mhn8\" (UniqueName: \"kubernetes.io/projected/1424baf7-9559-4343-819e-a2a31200759f-kube-api-access-6mhn8\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.221211 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.398495 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dc6j" event={"ID":"a6b1aae7-caaa-427d-8b07-705b02e81763","Type":"ContainerStarted","Data":"f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.401508 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhqpx" event={"ID":"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1","Type":"ContainerStarted","Data":"3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.403849 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqvv" event={"ID":"d4371d7c-f72d-4765-9101-34946d11d0e7","Type":"ContainerStarted","Data":"359fdaf31fe6b20b24349fbcd0fa88495f66725f41848eea6762b2e197b6023e"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.405689 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerStarted","Data":"10ebf6db7ee2964bf2e7f1a9dc6ff72c579ec55214bcb19505e1b006f8bbcdba"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.409445 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" event={"ID":"74b5802b-b8fb-48d1-8723-2c78386825db","Type":"ContainerDied","Data":"4dc61eaad2ae739a312d7f61027749c556ace7e16aad00f87ef7790d83668fcc"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.409510 4656 scope.go:117] "RemoveContainer" containerID="a3c344d4a99b4cc3c2b785ff5edacf8eec4aa5faf13d43bdf4fe9c80a6160a48" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.409618 4656 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.411984 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szsbj" event={"ID":"d6812603-edd0-45f4-b2b3-6d9ece7e98c2","Type":"ContainerStarted","Data":"94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.414104 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p48j" event={"ID":"42c5c29d-eebc-40b2-8a6d-a7a592efd69d","Type":"ContainerStarted","Data":"6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.417148 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"d18f94cea4f3c54ba99c855b801d8b744d7657dab8312dfc4b6351d91d1b429d"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.433591 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8dc6j" podStartSLOduration=2.578684326 podStartE2EDuration="1m18.433568684s" podCreationTimestamp="2026-01-28 15:21:27 +0000 UTC" firstStartedPulling="2026-01-28 15:21:29.094794316 +0000 UTC m=+179.602965130" lastFinishedPulling="2026-01-28 15:22:44.949678684 +0000 UTC m=+255.457849488" observedRunningTime="2026-01-28 15:22:45.429829858 +0000 UTC m=+255.938000672" watchObservedRunningTime="2026-01-28 15:22:45.433568684 +0000 UTC m=+255.941739498" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.492591 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zqqvv" podStartSLOduration=3.866182555 podStartE2EDuration="1m17.492571789s" 
podCreationTimestamp="2026-01-28 15:21:28 +0000 UTC" firstStartedPulling="2026-01-28 15:21:31.281440752 +0000 UTC m=+181.789611556" lastFinishedPulling="2026-01-28 15:22:44.907829986 +0000 UTC m=+255.416000790" observedRunningTime="2026-01-28 15:22:45.488569786 +0000 UTC m=+255.996740590" watchObservedRunningTime="2026-01-28 15:22:45.492571789 +0000 UTC m=+256.000742593" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.589338 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl"] Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.639021 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jpgn"] Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.644192 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jpgn"] Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.434491 4656 generic.go:334] "Generic (PLEG): container finished" podID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerID="10ebf6db7ee2964bf2e7f1a9dc6ff72c579ec55214bcb19505e1b006f8bbcdba" exitCode=0 Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.434572 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerDied","Data":"10ebf6db7ee2964bf2e7f1a9dc6ff72c579ec55214bcb19505e1b006f8bbcdba"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.439397 4656 generic.go:334] "Generic (PLEG): container finished" podID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerID="94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2" exitCode=0 Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.439472 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szsbj" 
event={"ID":"d6812603-edd0-45f4-b2b3-6d9ece7e98c2","Type":"ContainerDied","Data":"94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.444655 4656 generic.go:334] "Generic (PLEG): container finished" podID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerID="6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2" exitCode=0 Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.444746 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p48j" event={"ID":"42c5c29d-eebc-40b2-8a6d-a7a592efd69d","Type":"ContainerDied","Data":"6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.451852 4656 generic.go:334] "Generic (PLEG): container finished" podID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerID="3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b" exitCode=0 Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.451924 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhqpx" event={"ID":"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1","Type":"ContainerDied","Data":"3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.455653 4656 generic.go:334] "Generic (PLEG): container finished" podID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerID="a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a" exitCode=0 Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.455716 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4vpf" event={"ID":"7de9fc74-9948-4e73-ac93-25f9c22189ce","Type":"ContainerDied","Data":"a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.458890 4656 generic.go:334] "Generic (PLEG): container finished" 
podID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerID="3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f" exitCode=0 Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.458943 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzr9v" event={"ID":"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9","Type":"ContainerDied","Data":"3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.473254 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" event={"ID":"1424baf7-9559-4343-819e-a2a31200759f","Type":"ContainerStarted","Data":"b8607571541c31959c59bdccd5cee993218f221122b21d7bec939e2feab7e877"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.473308 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" event={"ID":"1424baf7-9559-4343-819e-a2a31200759f","Type":"ContainerStarted","Data":"306607f83c7b0a1c0f1315734ac758e6780e687fd87c131f4e36d3b541977411"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.473523 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.480254 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.594937 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" podStartSLOduration=28.5949138 podStartE2EDuration="28.5949138s" podCreationTimestamp="2026-01-28 15:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:46.592204503 +0000 
UTC m=+257.100375307" watchObservedRunningTime="2026-01-28 15:22:46.5949138 +0000 UTC m=+257.103084604" Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.194791 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" path="/var/lib/kubelet/pods/74b5802b-b8fb-48d1-8723-2c78386825db/volumes" Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.479257 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerStarted","Data":"98e5a5ee9e3fbdbc98cc8d8842a89ca56d4d33d594c2125a555bfd41937a3fbb"} Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.482371 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szsbj" event={"ID":"d6812603-edd0-45f4-b2b3-6d9ece7e98c2","Type":"ContainerStarted","Data":"cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723"} Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.484138 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p48j" event={"ID":"42c5c29d-eebc-40b2-8a6d-a7a592efd69d","Type":"ContainerStarted","Data":"8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c"} Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.485710 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhqpx" event={"ID":"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1","Type":"ContainerStarted","Data":"78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41"} Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.487290 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4vpf" event={"ID":"7de9fc74-9948-4e73-ac93-25f9c22189ce","Type":"ContainerStarted","Data":"10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746"} Jan 28 15:22:47 crc 
kubenswrapper[4656]: I0128 15:22:47.488900 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzr9v" event={"ID":"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9","Type":"ContainerStarted","Data":"891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855"} Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.508198 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cxc6z" podStartSLOduration=2.7609773029999998 podStartE2EDuration="1m21.508179502s" podCreationTimestamp="2026-01-28 15:21:26 +0000 UTC" firstStartedPulling="2026-01-28 15:21:28.066674348 +0000 UTC m=+178.574845152" lastFinishedPulling="2026-01-28 15:22:46.813876547 +0000 UTC m=+257.322047351" observedRunningTime="2026-01-28 15:22:47.506401282 +0000 UTC m=+258.014572076" watchObservedRunningTime="2026-01-28 15:22:47.508179502 +0000 UTC m=+258.016350306" Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.543714 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5p48j" podStartSLOduration=4.458827626 podStartE2EDuration="1m23.543692931s" podCreationTimestamp="2026-01-28 15:21:24 +0000 UTC" firstStartedPulling="2026-01-28 15:21:28.015825637 +0000 UTC m=+178.523996441" lastFinishedPulling="2026-01-28 15:22:47.100690942 +0000 UTC m=+257.608861746" observedRunningTime="2026-01-28 15:22:47.538768461 +0000 UTC m=+258.046939265" watchObservedRunningTime="2026-01-28 15:22:47.543692931 +0000 UTC m=+258.051863735" Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.562436 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w4vpf" podStartSLOduration=4.610898987 podStartE2EDuration="1m23.562419092s" podCreationTimestamp="2026-01-28 15:21:24 +0000 UTC" firstStartedPulling="2026-01-28 15:21:28.056096454 +0000 UTC m=+178.564267258" 
lastFinishedPulling="2026-01-28 15:22:47.007616559 +0000 UTC m=+257.515787363" observedRunningTime="2026-01-28 15:22:47.560025634 +0000 UTC m=+258.068196438" watchObservedRunningTime="2026-01-28 15:22:47.562419092 +0000 UTC m=+258.070589896" Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.577542 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gzr9v" podStartSLOduration=3.552805825 podStartE2EDuration="1m23.577523661s" podCreationTimestamp="2026-01-28 15:21:24 +0000 UTC" firstStartedPulling="2026-01-28 15:21:26.931814692 +0000 UTC m=+177.439985496" lastFinishedPulling="2026-01-28 15:22:46.956532528 +0000 UTC m=+257.464703332" observedRunningTime="2026-01-28 15:22:47.575634827 +0000 UTC m=+258.083805631" watchObservedRunningTime="2026-01-28 15:22:47.577523661 +0000 UTC m=+258.085694465" Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.597681 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nhqpx" podStartSLOduration=2.807131514 podStartE2EDuration="1m21.597662453s" podCreationTimestamp="2026-01-28 15:21:26 +0000 UTC" firstStartedPulling="2026-01-28 15:21:28.036339087 +0000 UTC m=+178.544509891" lastFinishedPulling="2026-01-28 15:22:46.826870026 +0000 UTC m=+257.335040830" observedRunningTime="2026-01-28 15:22:47.595035048 +0000 UTC m=+258.103205852" watchObservedRunningTime="2026-01-28 15:22:47.597662453 +0000 UTC m=+258.105833257" Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.620900 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-szsbj" podStartSLOduration=4.72395908 podStartE2EDuration="1m23.620877262s" podCreationTimestamp="2026-01-28 15:21:24 +0000 UTC" firstStartedPulling="2026-01-28 15:21:28.004900973 +0000 UTC m=+178.513071777" lastFinishedPulling="2026-01-28 15:22:46.901819155 +0000 UTC m=+257.409989959" 
observedRunningTime="2026-01-28 15:22:47.616489708 +0000 UTC m=+258.124660522" watchObservedRunningTime="2026-01-28 15:22:47.620877262 +0000 UTC m=+258.129048066" Jan 28 15:22:48 crc kubenswrapper[4656]: I0128 15:22:48.237807 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:22:48 crc kubenswrapper[4656]: I0128 15:22:48.237878 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:22:48 crc kubenswrapper[4656]: I0128 15:22:48.643846 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:22:48 crc kubenswrapper[4656]: I0128 15:22:48.645288 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:22:49 crc kubenswrapper[4656]: I0128 15:22:49.480204 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8dc6j" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="registry-server" probeResult="failure" output=< Jan 28 15:22:49 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 15:22:49 crc kubenswrapper[4656]: > Jan 28 15:22:49 crc kubenswrapper[4656]: I0128 15:22:49.693068 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqqvv" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="registry-server" probeResult="failure" output=< Jan 28 15:22:49 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 15:22:49 crc kubenswrapper[4656]: > Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.722779 4656 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.724186 4656 
kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.724370 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.724574 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f" gracePeriod=15 Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.724590 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821" gracePeriod=15 Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.724676 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671" gracePeriod=15 Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.724624 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c" gracePeriod=15 Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.724729 4656 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1" gracePeriod=15 Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.726375 4656 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.726571 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.726583 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.726596 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.726603 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.726610 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.726635 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.726644 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.726649 4656 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.726662 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.726670 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.729198 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729235 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.729248 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729255 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.729270 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729276 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729490 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729511 4656 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729522 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729530 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729538 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729549 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729557 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.778689 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781525 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781563 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781602 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781623 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781658 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781699 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781843 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781876 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882707 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882760 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882795 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882813 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882835 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882852 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882879 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882902 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883018 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883061 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883085 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883108 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883127 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883145 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883183 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883206 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.074746 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:53 crc kubenswrapper[4656]: W0128 15:22:53.094033 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-5153f6ea659d9f70f7b880773744e6ad1358b3f30271fc726e3c68b0fcdae782 WatchSource:0}: Error finding container 5153f6ea659d9f70f7b880773744e6ad1358b3f30271fc726e3c68b0fcdae782: Status 404 returned error can't find the container with id 5153f6ea659d9f70f7b880773744e6ad1358b3f30271fc726e3c68b0fcdae782 Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.097889 4656 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eee59cf14b4b5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:22:53.096924341 +0000 UTC m=+263.605095145,LastTimestamp:2026-01-28 15:22:53.096924341 +0000 UTC m=+263.605095145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.387254 4656 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.388045 4656 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.388408 4656 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.388666 4656 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.389005 4656 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.389105 4656 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.389585 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="200ms" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.540931 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459"} Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.541038 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5153f6ea659d9f70f7b880773744e6ad1358b3f30271fc726e3c68b0fcdae782"} Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.541871 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.544096 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.545610 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.546245 4656 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821" exitCode=0 Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.546264 4656 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671" exitCode=0 Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 
15:22:53.546272 4656 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c" exitCode=0 Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.546284 4656 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1" exitCode=2 Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.546355 4656 scope.go:117] "RemoveContainer" containerID="4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.548484 4656 generic.go:334] "Generic (PLEG): container finished" podID="23d13458-d18f-4e50-bd07-61d18319b4c7" containerID="1b53bded64e844d0eb1de8d68357820c2e6d1bfc568f8f1b250cc8890c0cc777" exitCode=0 Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.548513 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"23d13458-d18f-4e50-bd07-61d18319b4c7","Type":"ContainerDied","Data":"1b53bded64e844d0eb1de8d68357820c2e6d1bfc568f8f1b250cc8890c0cc777"} Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.549191 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.549805 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection 
refused" Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.590519 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="400ms" Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.991849 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="800ms" Jan 28 15:22:54 crc kubenswrapper[4656]: E0128 15:22:54.145711 4656 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eee59cf14b4b5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:22:53.096924341 +0000 UTC m=+263.605095145,LastTimestamp:2026-01-28 15:22:53.096924341 +0000 UTC m=+263.605095145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.558251 4656 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.636954 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.637150 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.701750 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.702920 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.703177 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.703389 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:54 crc kubenswrapper[4656]: E0128 15:22:54.793852 4656 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="1.6s" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.890853 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.891588 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.892044 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.892324 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.912509 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-kubelet-dir\") pod \"23d13458-d18f-4e50-bd07-61d18319b4c7\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " Jan 28 15:22:54 crc 
kubenswrapper[4656]: I0128 15:22:54.912971 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23d13458-d18f-4e50-bd07-61d18319b4c7-kube-api-access\") pod \"23d13458-d18f-4e50-bd07-61d18319b4c7\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.913124 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-var-lock\") pod \"23d13458-d18f-4e50-bd07-61d18319b4c7\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.912841 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "23d13458-d18f-4e50-bd07-61d18319b4c7" (UID: "23d13458-d18f-4e50-bd07-61d18319b4c7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.913473 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-var-lock" (OuterVolumeSpecName: "var-lock") pod "23d13458-d18f-4e50-bd07-61d18319b4c7" (UID: "23d13458-d18f-4e50-bd07-61d18319b4c7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.937656 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23d13458-d18f-4e50-bd07-61d18319b4c7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "23d13458-d18f-4e50-bd07-61d18319b4c7" (UID: "23d13458-d18f-4e50-bd07-61d18319b4c7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.014677 4656 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.014706 4656 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.014717 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23d13458-d18f-4e50-bd07-61d18319b4c7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.090587 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.091326 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.203078 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.204193 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.204577 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" 
pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.205194 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.205605 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.244137 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.245081 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.245698 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.246139 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.246865 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.247232 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.247677 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.333385 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.333547 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.333996 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.334116 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.334225 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.334144 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.334695 4656 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.334814 4656 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.334901 4656 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.548502 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.549449 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.550473 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.550690 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 
15:22:55.571627 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"23d13458-d18f-4e50-bd07-61d18319b4c7","Type":"ContainerDied","Data":"ecf52d47e7e50c6dd46bf76ed0fc0d2fa9f91e02bebe98c13fefcc025a606204"} Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.571728 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecf52d47e7e50c6dd46bf76ed0fc0d2fa9f91e02bebe98c13fefcc025a606204" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.571883 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.578931 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.580593 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.581038 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.581271 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.581507 4656 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f" exitCode=0 Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.581671 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.581761 4656 scope.go:117] "RemoveContainer" containerID="b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.581747 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.582469 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.597493 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc 
kubenswrapper[4656]: I0128 15:22:55.597988 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.598394 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.599413 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.599841 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.625433 4656 scope.go:117] "RemoveContainer" containerID="7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.625530 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.626181 
4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.626523 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.626849 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.627267 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.627624 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.627812 
4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.633038 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.633598 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.633882 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.634210 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.635234 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.635654 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.635959 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.636239 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.644270 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.644834 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 
15:22:55.645396 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.645862 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.646323 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.646590 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.646844 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.646858 
4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.647077 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.647744 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.648246 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.648511 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.648750 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.648977 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.649225 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.649552 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.649701 4656 scope.go:117] "RemoveContainer" containerID="0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.670251 4656 scope.go:117] "RemoveContainer" containerID="da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.693004 4656 scope.go:117] "RemoveContainer" containerID="82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.753179 4656 scope.go:117] "RemoveContainer" 
containerID="7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.775790 4656 scope.go:117] "RemoveContainer" containerID="b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.776438 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\": container with ID starting with b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821 not found: ID does not exist" containerID="b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.776488 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821"} err="failed to get container status \"b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\": rpc error: code = NotFound desc = could not find container \"b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\": container with ID starting with b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821 not found: ID does not exist" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.776518 4656 scope.go:117] "RemoveContainer" containerID="7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.777956 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\": container with ID starting with 7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671 not found: ID does not exist" containerID="7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671" Jan 28 15:22:55 crc 
kubenswrapper[4656]: I0128 15:22:55.777985 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671"} err="failed to get container status \"7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\": rpc error: code = NotFound desc = could not find container \"7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\": container with ID starting with 7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671 not found: ID does not exist" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.778003 4656 scope.go:117] "RemoveContainer" containerID="0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.778287 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\": container with ID starting with 0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c not found: ID does not exist" containerID="0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.778357 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c"} err="failed to get container status \"0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\": rpc error: code = NotFound desc = could not find container \"0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\": container with ID starting with 0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c not found: ID does not exist" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.778377 4656 scope.go:117] "RemoveContainer" containerID="da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1" Jan 28 
15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.778754 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\": container with ID starting with da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1 not found: ID does not exist" containerID="da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.778813 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1"} err="failed to get container status \"da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\": rpc error: code = NotFound desc = could not find container \"da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\": container with ID starting with da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1 not found: ID does not exist" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.778853 4656 scope.go:117] "RemoveContainer" containerID="82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.779126 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\": container with ID starting with 82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f not found: ID does not exist" containerID="82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.779193 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f"} err="failed to get container status 
\"82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\": rpc error: code = NotFound desc = could not find container \"82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\": container with ID starting with 82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f not found: ID does not exist" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.779276 4656 scope.go:117] "RemoveContainer" containerID="7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.779991 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\": container with ID starting with 7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197 not found: ID does not exist" containerID="7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.780021 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197"} err="failed to get container status \"7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\": rpc error: code = NotFound desc = could not find container \"7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\": container with ID starting with 7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197 not found: ID does not exist" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.876246 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:22:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:22:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:22:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:22:55Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:80389051bc0ea34449a3ee9b5472446041cb0f2e47fa9d2048010428fa1019ba\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:e9a0654b0e53f31c6f63037d06bc5145dc7b9c46a7ac2d778d473d966efb9e14\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1675675872},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1202349806},{\\\"names\\\":[],\\\"sizeBytes\\\":1187310829},{\\\"names\\\":[],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sh
a256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\"
:510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNam
espaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.877380 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.877914 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.878394 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.878640 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.878663 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:22:56 crc kubenswrapper[4656]: E0128 15:22:56.394530 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="3.2s" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.633824 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.634587 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.634844 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.635089 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.635496 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc 
kubenswrapper[4656]: I0128 15:22:56.635727 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.635950 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.636122 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.638413 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.638677 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.638855 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.639034 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.639243 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.639428 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.639630 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.639825 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.669478 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.669625 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.710250 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.710949 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.711407 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.711748 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.712048 4656 status_manager.go:851] 
"Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.712417 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.712820 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.713117 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.713474 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.116559 4656 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.116856 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.179714 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.181551 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.182207 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.182591 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.182954 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.183272 4656 status_manager.go:851] 
"Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.183457 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.183667 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.183921 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.184205 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.659541 4656 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.660141 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.660496 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.660737 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.660814 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.661032 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.661393 4656 status_manager.go:851] "Failed to 
get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.661615 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.661906 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.662215 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.662620 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.662861 4656 status_manager.go:851] "Failed to get status for pod" 
podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.663092 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.663437 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.663713 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.664575 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.664989 4656 status_manager.go:851] "Failed to get status 
for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.665181 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.293072 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.293730 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.294096 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.294527 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 
38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.294876 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.295069 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.295343 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.296250 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.296683 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": 
dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.296885 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.337251 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.337708 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.337896 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.338065 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.338226 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" 
pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.338440 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.338678 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.338836 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.339032 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.339298 4656 status_manager.go:851] "Failed to get status for pod" 
podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.681262 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.682292 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.682643 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.683122 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.683382 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 
38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.683595 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.683825 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.684081 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.684343 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.684566 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.684781 4656 status_manager.go:851] "Failed to get status for pod" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" pod="openshift-marketplace/redhat-operators-zqqvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zqqvv\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.720440 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.720892 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.721293 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.721478 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.721621 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" 
pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.721797 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.721968 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.722112 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.722298 4656 status_manager.go:851] "Failed to get status for pod" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" pod="openshift-marketplace/redhat-operators-zqqvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zqqvv\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.722455 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" 
pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.722604 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:59 crc kubenswrapper[4656]: E0128 15:22:59.595036 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="6.4s" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.173567 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.174084 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.174300 4656 status_manager.go:851] "Failed to get status for pod" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" pod="openshift-marketplace/redhat-operators-zqqvv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zqqvv\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.174470 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.174776 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.175452 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.175712 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.176092 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.176442 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.177470 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:04 crc kubenswrapper[4656]: E0128 15:23:04.147099 4656 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eee59cf14b4b5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:22:53.096924341 +0000 UTC 
m=+263.605095145,LastTimestamp:2026-01-28 15:22:53.096924341 +0000 UTC m=+263.605095145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:23:05 crc kubenswrapper[4656]: E0128 15:23:05.173009 4656 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" volumeName="registry-storage" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.661789 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.661866 4656 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="08932142792b5b7e1afc60e25e6fb6b092c9c65185a0e407f807d90b1928807c" exitCode=1 Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.661909 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"08932142792b5b7e1afc60e25e6fb6b092c9c65185a0e407f807d90b1928807c"} Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.662569 4656 scope.go:117] "RemoveContainer" containerID="08932142792b5b7e1afc60e25e6fb6b092c9c65185a0e407f807d90b1928807c" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.663017 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.663547 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.663750 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.663989 4656 status_manager.go:851] "Failed to get status for pod" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" pod="openshift-marketplace/redhat-operators-zqqvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zqqvv\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.664239 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.664461 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" 
pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.665488 4656 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.666008 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.666267 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.666450 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.666594 4656 status_manager.go:851] "Failed to get status for pod" 
podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: E0128 15:23:05.996093 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="7s" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.093914 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.113878 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:23:06Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:23:06Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:23:06Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:23:06Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:80389051bc0ea34449a3ee9b5472446041cb0f2e47fa9d2048010428fa1019ba\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:e9a0654b0e53f31c6f63037d06bc5145dc7b9c46a7ac2d778d473d966efb9e14
\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1675675872},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1202349806},{\\\"names\\\":[],\\\"sizeBytes\\\":1187310829},{\\\"names\\\":[],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1b
f8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08
287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":
485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.114862 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.115304 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 
28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.115914 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.116533 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.116563 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.170002 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.171038 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.172287 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.172912 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.176321 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.176782 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.177048 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.177258 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.177582 4656 status_manager.go:851] "Failed to get status for pod" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" pod="openshift-marketplace/redhat-operators-zqqvv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zqqvv\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.177826 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.178076 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.178368 4656 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.191487 4656 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.191531 4656 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.191914 4656 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.192403 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.673682 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.673851 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"128b7d572711eb1bedbcebe7171d9e0ccb731d7abb7b0938208d1595a6627ed0"} Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.676215 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.676596 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.677057 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.677484 4656 status_manager.go:851] "Failed to get status for pod" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" pod="openshift-marketplace/redhat-operators-zqqvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zqqvv\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.677860 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.678343 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.678692 4656 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.678954 4656 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" 
containerID="4214e887b6e054911c39dc34a9c052ddf9ed1596e5f79a9ee81a9055a585840d" exitCode=0 Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.678993 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4214e887b6e054911c39dc34a9c052ddf9ed1596e5f79a9ee81a9055a585840d"} Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.679008 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.679022 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3c074a23ef153f447e5ea166f0f0633974aaee9b578f2c42b78227590475bd08"} Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.679280 4656 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.679334 4656 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.679370 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.679642 4656 
mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.679666 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.679953 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.680486 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.680955 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.681356 4656 status_manager.go:851] "Failed to get status 
for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.681586 4656 status_manager.go:851] "Failed to get status for pod" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" pod="openshift-marketplace/redhat-operators-zqqvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zqqvv\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.681798 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.681982 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.682193 4656 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.682459 4656 status_manager.go:851] "Failed to get status for pod" 
podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.682900 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.683262 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.683604 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:07 crc kubenswrapper[4656]: I0128 15:23:07.689100 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8f37329f616c78ad784c13eda77eaa88794e0052f7e00cd32d49715a2f9a4cf0"} Jan 28 15:23:07 crc kubenswrapper[4656]: I0128 15:23:07.689524 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"05d03fb362120040d14cb147447c1af0b7f5e4c3498035d023d552051966dc6c"} Jan 28 15:23:07 crc kubenswrapper[4656]: I0128 15:23:07.689541 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c2624c7462adb66d9805bea12ae87c3a87076025db60bee98300588697c6b1c"} Jan 28 15:23:07 crc kubenswrapper[4656]: I0128 15:23:07.689555 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9650d2ad1dc114811a53ef76665ab24eee05a00c56dd02b0e49cd508e6997d1a"} Jan 28 15:23:08 crc kubenswrapper[4656]: I0128 15:23:08.698338 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ec619f3e9442b716a8a88f3821ffe616904d9496df15c693b7995136e969b8fd"} Jan 28 15:23:08 crc kubenswrapper[4656]: I0128 15:23:08.699436 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:08 crc kubenswrapper[4656]: I0128 15:23:08.698724 4656 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:08 crc kubenswrapper[4656]: I0128 15:23:08.699618 4656 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:11 crc kubenswrapper[4656]: I0128 15:23:11.192772 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:11 crc kubenswrapper[4656]: I0128 15:23:11.192830 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:11 crc kubenswrapper[4656]: I0128 15:23:11.200678 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:13 crc kubenswrapper[4656]: I0128 15:23:13.726787 4656 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:13 crc kubenswrapper[4656]: I0128 15:23:13.952921 4656 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f3672fad-4cf4-4873-9065-1100f7b36ed0" Jan 28 15:23:14 crc kubenswrapper[4656]: I0128 15:23:14.730230 4656 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:14 crc kubenswrapper[4656]: I0128 15:23:14.730543 4656 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:14 crc kubenswrapper[4656]: I0128 15:23:14.733335 4656 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f3672fad-4cf4-4873-9065-1100f7b36ed0" Jan 28 15:23:14 crc kubenswrapper[4656]: I0128 15:23:14.736209 4656 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://9650d2ad1dc114811a53ef76665ab24eee05a00c56dd02b0e49cd508e6997d1a" Jan 28 15:23:14 crc kubenswrapper[4656]: I0128 15:23:14.736245 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:15 crc kubenswrapper[4656]: I0128 15:23:15.261685 4656 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:23:15 crc kubenswrapper[4656]: I0128 15:23:15.261826 4656 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 28 15:23:15 crc kubenswrapper[4656]: I0128 15:23:15.261897 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 28 15:23:15 crc kubenswrapper[4656]: I0128 15:23:15.734846 4656 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:15 crc kubenswrapper[4656]: I0128 15:23:15.734876 4656 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:15 crc kubenswrapper[4656]: I0128 15:23:15.738061 4656 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f3672fad-4cf4-4873-9065-1100f7b36ed0" Jan 28 15:23:16 crc kubenswrapper[4656]: I0128 15:23:16.093984 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:23:23 crc kubenswrapper[4656]: I0128 15:23:23.933396 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 15:23:24 crc kubenswrapper[4656]: 
I0128 15:23:24.323871 4656 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 15:23:24 crc kubenswrapper[4656]: I0128 15:23:24.456944 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.120089 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.181059 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.261980 4656 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.262031 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.423338 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.645314 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.698986 4656 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.948809 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.066581 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.109458 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.122740 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.180586 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.252054 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.336015 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.374129 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.431510 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.538925 4656 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.561483 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.696637 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.805469 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.866244 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.886274 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.094902 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.148996 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.232495 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.251901 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.338571 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 
15:23:27.435694 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.449363 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.472436 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.560951 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.747745 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.052978 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.235303 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.312222 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.403508 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.503316 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.521351 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.529082 4656 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.543470 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.655532 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.828284 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.842768 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.032657 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.041299 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.074011 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.175660 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.187593 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.208114 4656 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.216955 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.218393 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.409203 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.409203 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.467568 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.471078 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.487856 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.546951 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.614410 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.719989 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 15:23:29 crc 
kubenswrapper[4656]: I0128 15:23:29.783346 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.786130 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.805469 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.837354 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.850806 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.960881 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.030113 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.049774 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.086288 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.163134 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.204917 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.231301 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.368880 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.370950 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.407683 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.425572 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.429823 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.461673 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.465690 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.500460 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.568024 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.604298 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.640559 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.750229 4656 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.773056 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.784562 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.865559 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.888913 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.928948 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.021661 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.109339 4656 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.243481 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.284708 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.414423 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.474475 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.605698 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.644444 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.764997 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.765954 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.797030 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.819842 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.904499 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.926744 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.991915 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.000738 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.026457 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.040532 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.123480 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.152852 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.232121 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.315115 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.323195 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.385061 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.441280 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.489021 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.492005 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.536434 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.718037 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.755915 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.780877 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.805757 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.845435 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.902466 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.915584 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.938724 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.940135 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.087142 4656 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.220648 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.352958 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.434052 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.442709 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.477596 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.510183 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.519210 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.550040 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.559426 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.561662 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.603578 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.610211 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.664554 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.726935 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.781092 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.786280 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.791252 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.802609 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.894604 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.911712 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.959150 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.179658 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.289914 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.292254 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.350486 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.481250 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.488740 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.493865 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.601387 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.634262 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.669835 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.692402 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.802577 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.856047 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.923964 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.960011 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.079118 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.254666 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.267851 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.271928 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.276927 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.313537 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.330582 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.369549 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.378369 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.390193 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.434665 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.437117 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.453716 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.528369 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.585354 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.615729 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.658086 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.689269 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.751470 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.892597 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.912020 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.960083 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.031133 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.063114 4656 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.066648 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.135981 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.231877 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.275969 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.321348 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.464966 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.483471 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.534825 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.564912 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.582954 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.601286 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.609259 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.628040 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.666145 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.805235 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.840988 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.896655 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.976018 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.018851 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.027859 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.067809 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.081797 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.115201 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.173096 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.205739 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.253094 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.348388 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.430138 4656 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.457326 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.474624 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.573033 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.618035 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.648576 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.667556 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.673275 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.743838 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.791729 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.920271 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.027880 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.040917 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.106899 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.122132 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.176804 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.285310 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.289462 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.350294 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.356660 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.396016 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.504474 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.513570 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.632210 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.744486 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.752200 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.778151 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.898608 4656 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.901257 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=46.901222664 podStartE2EDuration="46.901222664s" podCreationTimestamp="2026-01-28 15:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:13.878436321 +0000 UTC m=+284.386607135" watchObservedRunningTime="2026-01-28 15:23:38.901222664 +0000 UTC m=+309.409393478"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.904211 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.904268 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.914087 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.925958 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.925935404 podStartE2EDuration="25.925935404s" podCreationTimestamp="2026-01-28 15:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:38.924844335 +0000 UTC m=+309.433015149" watchObservedRunningTime="2026-01-28 15:23:38.925935404 +0000 UTC m=+309.434106218"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.028608 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.165893 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.210187 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.443406 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.506938 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.619637 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.707481 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.715209 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.756946 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.822875 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 28 15:23:41 crc kubenswrapper[4656]: I0128 15:23:41.065001 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 28 15:23:41 crc kubenswrapper[4656]: I0128 15:23:41.317146 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 28 15:23:41 crc kubenswrapper[4656]: I0128 15:23:41.657240 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 28 15:23:47 crc kubenswrapper[4656]: I0128 15:23:47.652501 4656 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 28 15:23:47 crc kubenswrapper[4656]: I0128 15:23:47.653291 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459" gracePeriod=5
Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.224483 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.224909 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.334672 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.334745 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.334828 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.334890 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.334907 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.335488 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName:
"manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.335488 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.335525 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.335496 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.352940 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.401076 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.401423 4656 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459" exitCode=137 Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.401509 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.401587 4656 scope.go:117] "RemoveContainer" containerID="c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.416737 4656 scope.go:117] "RemoveContainer" containerID="c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459" Jan 28 15:23:53 crc kubenswrapper[4656]: E0128 15:23:53.417275 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459\": container with ID starting with c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459 not found: ID does not exist" containerID="c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.417400 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459"} err="failed to get container status \"c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459\": rpc error: code = NotFound desc = could not find container 
\"c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459\": container with ID starting with c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459 not found: ID does not exist" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.436504 4656 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.436538 4656 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.436553 4656 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.436561 4656 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.436569 4656 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:55 crc kubenswrapper[4656]: I0128 15:23:55.176927 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 28 15:23:55 crc kubenswrapper[4656]: I0128 15:23:55.177197 4656 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 28 15:23:55 crc kubenswrapper[4656]: I0128 15:23:55.187129 4656 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:23:55 crc kubenswrapper[4656]: I0128 15:23:55.187181 4656 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0d242c0c-b094-4c8d-88b5-e88f0447cf64" Jan 28 15:23:55 crc kubenswrapper[4656]: I0128 15:23:55.191417 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:23:55 crc kubenswrapper[4656]: I0128 15:23:55.191437 4656 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0d242c0c-b094-4c8d-88b5-e88f0447cf64" Jan 28 15:23:56 crc kubenswrapper[4656]: I0128 15:23:56.421410 4656 generic.go:334] "Generic (PLEG): container finished" podID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerID="bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469" exitCode=0 Jan 28 15:23:56 crc kubenswrapper[4656]: I0128 15:23:56.421541 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" event={"ID":"c7b09f99-0d13-49a0-8b8d-fc77915a171d","Type":"ContainerDied","Data":"bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469"} Jan 28 15:23:56 crc kubenswrapper[4656]: I0128 15:23:56.422669 4656 scope.go:117] "RemoveContainer" containerID="bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469" Jan 28 15:23:56 crc kubenswrapper[4656]: I0128 15:23:56.740453 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:23:56 crc kubenswrapper[4656]: I0128 15:23:56.741508 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:23:57 crc 
kubenswrapper[4656]: I0128 15:23:57.430596 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" event={"ID":"c7b09f99-0d13-49a0-8b8d-fc77915a171d","Type":"ContainerStarted","Data":"38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4"} Jan 28 15:23:57 crc kubenswrapper[4656]: I0128 15:23:57.431735 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:23:57 crc kubenswrapper[4656]: I0128 15:23:57.435122 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.184410 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l99lt"] Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.185024 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" podUID="97f85e75-6682-490f-9f1d-cdf924a67f38" containerName="controller-manager" containerID="cri-o://f0551be6db9554b1b88c01d840464311627da50873cfe249dfc47e8c1d6604bf" gracePeriod=30 Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.284557 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"] Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.284744 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" podUID="a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" containerName="route-controller-manager" containerID="cri-o://2ccc89aaa2aabb8a9e2f194a6069859e93e63c7aba50492e992ba9629461da97" gracePeriod=30 Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.384469 4656 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5p48j"] Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.384786 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5p48j" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerName="registry-server" containerID="cri-o://8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c" gracePeriod=2 Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.460423 4656 generic.go:334] "Generic (PLEG): container finished" podID="97f85e75-6682-490f-9f1d-cdf924a67f38" containerID="f0551be6db9554b1b88c01d840464311627da50873cfe249dfc47e8c1d6604bf" exitCode=0 Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.460507 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" event={"ID":"97f85e75-6682-490f-9f1d-cdf924a67f38","Type":"ContainerDied","Data":"f0551be6db9554b1b88c01d840464311627da50873cfe249dfc47e8c1d6604bf"} Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.462260 4656 generic.go:334] "Generic (PLEG): container finished" podID="a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" containerID="2ccc89aaa2aabb8a9e2f194a6069859e93e63c7aba50492e992ba9629461da97" exitCode=0 Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.462297 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" event={"ID":"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7","Type":"ContainerDied","Data":"2ccc89aaa2aabb8a9e2f194a6069859e93e63c7aba50492e992ba9629461da97"} Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.704352 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.710022 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.782146 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-szsbj"] Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.782556 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-szsbj" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="registry-server" containerID="cri-o://cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723" gracePeriod=2 Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.797046 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.886916 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f85e75-6682-490f-9f1d-cdf924a67f38-serving-cert\") pod \"97f85e75-6682-490f-9f1d-cdf924a67f38\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.886976 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s67ff\" (UniqueName: \"kubernetes.io/projected/97f85e75-6682-490f-9f1d-cdf924a67f38-kube-api-access-s67ff\") pod \"97f85e75-6682-490f-9f1d-cdf924a67f38\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.887016 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-config\") pod \"97f85e75-6682-490f-9f1d-cdf924a67f38\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.887051 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config\") pod \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.887068 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-client-ca\") pod \"97f85e75-6682-490f-9f1d-cdf924a67f38\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.887112 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-proxy-ca-bundles\") pod \"97f85e75-6682-490f-9f1d-cdf924a67f38\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.887145 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca\") pod \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.887199 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7qd6\" (UniqueName: \"kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6\") pod \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.887838 4656 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config" (OuterVolumeSpecName: "config") pod "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.888608 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "97f85e75-6682-490f-9f1d-cdf924a67f38" (UID: "97f85e75-6682-490f-9f1d-cdf924a67f38"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.888857 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca" (OuterVolumeSpecName: "client-ca") pod "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.888877 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-config" (OuterVolumeSpecName: "config") pod "97f85e75-6682-490f-9f1d-cdf924a67f38" (UID: "97f85e75-6682-490f-9f1d-cdf924a67f38"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.888921 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-client-ca" (OuterVolumeSpecName: "client-ca") pod "97f85e75-6682-490f-9f1d-cdf924a67f38" (UID: "97f85e75-6682-490f-9f1d-cdf924a67f38"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.890845 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert\") pod \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.891314 4656 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.891331 4656 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.891343 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.891352 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.891361 4656 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.894380 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6" (OuterVolumeSpecName: "kube-api-access-d7qd6") 
pod "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7"). InnerVolumeSpecName "kube-api-access-d7qd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.895411 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f85e75-6682-490f-9f1d-cdf924a67f38-kube-api-access-s67ff" (OuterVolumeSpecName: "kube-api-access-s67ff") pod "97f85e75-6682-490f-9f1d-cdf924a67f38" (UID: "97f85e75-6682-490f-9f1d-cdf924a67f38"). InnerVolumeSpecName "kube-api-access-s67ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.895670 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.895681 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f85e75-6682-490f-9f1d-cdf924a67f38-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "97f85e75-6682-490f-9f1d-cdf924a67f38" (UID: "97f85e75-6682-490f-9f1d-cdf924a67f38"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.992312 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-utilities\") pod \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.992417 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnkz5\" (UniqueName: \"kubernetes.io/projected/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-kube-api-access-xnkz5\") pod \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.992526 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-catalog-content\") pod \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.993823 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f85e75-6682-490f-9f1d-cdf924a67f38-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.993844 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s67ff\" (UniqueName: \"kubernetes.io/projected/97f85e75-6682-490f-9f1d-cdf924a67f38-kube-api-access-s67ff\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.993889 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-utilities" (OuterVolumeSpecName: "utilities") pod "42c5c29d-eebc-40b2-8a6d-a7a592efd69d" (UID: 
"42c5c29d-eebc-40b2-8a6d-a7a592efd69d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.993860 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7qd6\" (UniqueName: \"kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.993989 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.996518 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-kube-api-access-xnkz5" (OuterVolumeSpecName: "kube-api-access-xnkz5") pod "42c5c29d-eebc-40b2-8a6d-a7a592efd69d" (UID: "42c5c29d-eebc-40b2-8a6d-a7a592efd69d"). InnerVolumeSpecName "kube-api-access-xnkz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.055901 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42c5c29d-eebc-40b2-8a6d-a7a592efd69d" (UID: "42c5c29d-eebc-40b2-8a6d-a7a592efd69d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.095601 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.095671 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnkz5\" (UniqueName: \"kubernetes.io/projected/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-kube-api-access-xnkz5\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.095687 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.124070 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.196147 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-utilities\") pod \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.196233 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-catalog-content\") pod \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.197040 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-utilities" (OuterVolumeSpecName: 
"utilities") pod "d6812603-edd0-45f4-b2b3-6d9ece7e98c2" (UID: "d6812603-edd0-45f4-b2b3-6d9ece7e98c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.204319 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj8vt\" (UniqueName: \"kubernetes.io/projected/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-kube-api-access-nj8vt\") pod \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.204758 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.208094 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-kube-api-access-nj8vt" (OuterVolumeSpecName: "kube-api-access-nj8vt") pod "d6812603-edd0-45f4-b2b3-6d9ece7e98c2" (UID: "d6812603-edd0-45f4-b2b3-6d9ece7e98c2"). InnerVolumeSpecName "kube-api-access-nj8vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.238681 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6812603-edd0-45f4-b2b3-6d9ece7e98c2" (UID: "d6812603-edd0-45f4-b2b3-6d9ece7e98c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.306472 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj8vt\" (UniqueName: \"kubernetes.io/projected/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-kube-api-access-nj8vt\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.306501 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.471731 4656 generic.go:334] "Generic (PLEG): container finished" podID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerID="cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723" exitCode=0 Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.471780 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szsbj" event={"ID":"d6812603-edd0-45f4-b2b3-6d9ece7e98c2","Type":"ContainerDied","Data":"cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723"} Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.471813 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.471857 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szsbj" event={"ID":"d6812603-edd0-45f4-b2b3-6d9ece7e98c2","Type":"ContainerDied","Data":"924a066d9ecea51a84946c62ff66f97e3aaf6278887d64d1b9553585e9692514"} Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.471896 4656 scope.go:117] "RemoveContainer" containerID="cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.476266 4656 generic.go:334] "Generic (PLEG): container finished" podID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerID="8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c" exitCode=0 Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.476335 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p48j" event={"ID":"42c5c29d-eebc-40b2-8a6d-a7a592efd69d","Type":"ContainerDied","Data":"8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c"} Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.476361 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.476365 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p48j" event={"ID":"42c5c29d-eebc-40b2-8a6d-a7a592efd69d","Type":"ContainerDied","Data":"97e67bc73ce17661993af527d6be763d8d88936ad8acbd056a6ba60090f1140e"} Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.482800 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.482969 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" event={"ID":"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7","Type":"ContainerDied","Data":"8331e16235e5a94b4f704932116317a3f82336aecaf9afbef2cef9053d0f0822"} Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.486014 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" event={"ID":"97f85e75-6682-490f-9f1d-cdf924a67f38","Type":"ContainerDied","Data":"c4a2df154e1c3edead1c41618548fc41c7c90a8f88ea7c9973c02acba56d736c"} Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.486114 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.497809 4656 scope.go:117] "RemoveContainer" containerID="94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.609551 4656 scope.go:117] "RemoveContainer" containerID="b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.618663 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-szsbj"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.625291 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-szsbj"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.628439 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l99lt"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.633557 4656 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l99lt"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.639966 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5p48j"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.640775 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5p48j"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.649475 4656 scope.go:117] "RemoveContainer" containerID="cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.649692 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"] Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.649999 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723\": container with ID starting with cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723 not found: ID does not exist" containerID="cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.650034 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723"} err="failed to get container status \"cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723\": rpc error: code = NotFound desc = could not find container \"cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723\": container with ID starting with cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723 not found: ID does not exist" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.650056 4656 scope.go:117] "RemoveContainer" 
containerID="94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.653289 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2\": container with ID starting with 94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2 not found: ID does not exist" containerID="94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.653453 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2"} err="failed to get container status \"94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2\": rpc error: code = NotFound desc = could not find container \"94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2\": container with ID starting with 94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2 not found: ID does not exist" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.653559 4656 scope.go:117] "RemoveContainer" containerID="b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.656216 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"] Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.656356 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414\": container with ID starting with b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414 not found: ID does not exist" containerID="b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414" Jan 28 15:24:02 
crc kubenswrapper[4656]: I0128 15:24:02.656381 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414"} err="failed to get container status \"b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414\": rpc error: code = NotFound desc = could not find container \"b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414\": container with ID starting with b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414 not found: ID does not exist" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.656402 4656 scope.go:117] "RemoveContainer" containerID="8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.660670 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-667b4"] Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.661463 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerName="extract-utilities" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.661540 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerName="extract-utilities" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.661687 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.661762 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.661840 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" containerName="installer" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.661909 4656 
state_mem.go:107] "Deleted CPUSet assignment" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" containerName="installer" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.661977 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="extract-content" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.662033 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="extract-content" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.662092 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f85e75-6682-490f-9f1d-cdf924a67f38" containerName="controller-manager" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.662280 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f85e75-6682-490f-9f1d-cdf924a67f38" containerName="controller-manager" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.662368 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="extract-utilities" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.662626 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="extract-utilities" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.662699 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerName="registry-server" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.662767 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerName="registry-server" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.662841 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerName="extract-content" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.662928 4656 
state_mem.go:107] "Deleted CPUSet assignment" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerName="extract-content" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.662999 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" containerName="route-controller-manager" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.663092 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" containerName="route-controller-manager" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.663279 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="registry-server" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.663367 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="registry-server" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.663908 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" containerName="route-controller-manager" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.664009 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.664099 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerName="registry-server" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.664179 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" containerName="installer" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.664255 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f85e75-6682-490f-9f1d-cdf924a67f38" containerName="controller-manager" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 
15:24:02.664316 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="registry-server" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.664913 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.667099 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.667808 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.677841 4656 scope.go:117] "RemoveContainer" containerID="6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.690450 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-667b4"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.692340 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.693115 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.695740 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.697675 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.697738 4656 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.698378 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.698496 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.698620 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.699074 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.699376 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.699566 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.699932 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.704900 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.707797 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.711884 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-proxy-ca-bundles\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.711968 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-client-ca\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.712001 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-config\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.712045 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/986d7566-ac0b-4651-b2ea-3a839466d5be-serving-cert\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.712070 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-client-ca\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " 
pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.712095 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-config\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.712171 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-serving-cert\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.712219 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqf4c\" (UniqueName: \"kubernetes.io/projected/986d7566-ac0b-4651-b2ea-3a839466d5be-kube-api-access-lqf4c\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.712258 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25kqc\" (UniqueName: \"kubernetes.io/projected/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-kube-api-access-25kqc\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.721502 4656 scope.go:117] "RemoveContainer" 
containerID="cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.745053 4656 scope.go:117] "RemoveContainer" containerID="8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.745500 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c\": container with ID starting with 8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c not found: ID does not exist" containerID="8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.745527 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c"} err="failed to get container status \"8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c\": rpc error: code = NotFound desc = could not find container \"8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c\": container with ID starting with 8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c not found: ID does not exist" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.745550 4656 scope.go:117] "RemoveContainer" containerID="6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.745744 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2\": container with ID starting with 6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2 not found: ID does not exist" containerID="6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2" Jan 28 15:24:02 crc 
kubenswrapper[4656]: I0128 15:24:02.745772 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2"} err="failed to get container status \"6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2\": rpc error: code = NotFound desc = could not find container \"6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2\": container with ID starting with 6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2 not found: ID does not exist" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.745786 4656 scope.go:117] "RemoveContainer" containerID="cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.745981 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98\": container with ID starting with cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98 not found: ID does not exist" containerID="cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.746002 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98"} err="failed to get container status \"cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98\": rpc error: code = NotFound desc = could not find container \"cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98\": container with ID starting with cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98 not found: ID does not exist" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.746017 4656 scope.go:117] "RemoveContainer" containerID="2ccc89aaa2aabb8a9e2f194a6069859e93e63c7aba50492e992ba9629461da97" Jan 28 
15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.759656 4656 scope.go:117] "RemoveContainer" containerID="f0551be6db9554b1b88c01d840464311627da50873cfe249dfc47e8c1d6604bf" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.813761 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqf4c\" (UniqueName: \"kubernetes.io/projected/986d7566-ac0b-4651-b2ea-3a839466d5be-kube-api-access-lqf4c\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.813819 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25kqc\" (UniqueName: \"kubernetes.io/projected/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-kube-api-access-25kqc\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.813867 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-proxy-ca-bundles\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.813919 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-client-ca\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.814941 4656 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-client-ca\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.813946 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-config\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.815044 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/986d7566-ac0b-4651-b2ea-3a839466d5be-serving-cert\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.815061 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-client-ca\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.815126 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-proxy-ca-bundles\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " 
pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.815083 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-config\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.815580 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-serving-cert\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.816203 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-config\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.818057 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-config\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.818408 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-client-ca\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: 
\"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.819507 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/986d7566-ac0b-4651-b2ea-3a839466d5be-serving-cert\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.825832 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-serving-cert\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.833488 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25kqc\" (UniqueName: \"kubernetes.io/projected/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-kube-api-access-25kqc\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.841987 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqf4c\" (UniqueName: \"kubernetes.io/projected/986d7566-ac0b-4651-b2ea-3a839466d5be-kube-api-access-lqf4c\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.003418 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.010998 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.049813 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-667b4"] Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.055225 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj"] Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.185711 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" path="/var/lib/kubelet/pods/42c5c29d-eebc-40b2-8a6d-a7a592efd69d/volumes" Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.186847 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97f85e75-6682-490f-9f1d-cdf924a67f38" path="/var/lib/kubelet/pods/97f85e75-6682-490f-9f1d-cdf924a67f38/volumes" Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.188450 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" path="/var/lib/kubelet/pods/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7/volumes" Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.189607 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" path="/var/lib/kubelet/pods/d6812603-edd0-45f4-b2b3-6d9ece7e98c2/volumes" Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.341376 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-667b4"] Jan 28 15:24:03 crc kubenswrapper[4656]: W0128 15:24:03.352745 4656 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d2dc2b5_f3f8_4be6_87ec_08e25875d581.slice/crio-97393666ef37592c4bfc65dc644d07d0966a66451e62aa8f8cf1cef3bd49cf89 WatchSource:0}: Error finding container 97393666ef37592c4bfc65dc644d07d0966a66451e62aa8f8cf1cef3bd49cf89: Status 404 returned error can't find the container with id 97393666ef37592c4bfc65dc644d07d0966a66451e62aa8f8cf1cef3bd49cf89 Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.539458 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" event={"ID":"7d2dc2b5-f3f8-4be6-87ec-08e25875d581","Type":"ContainerStarted","Data":"97393666ef37592c4bfc65dc644d07d0966a66451e62aa8f8cf1cef3bd49cf89"} Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.600394 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj"] Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.979926 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cxc6z"] Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.980669 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cxc6z" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="registry-server" containerID="cri-o://98e5a5ee9e3fbdbc98cc8d8842a89ca56d4d33d594c2125a555bfd41937a3fbb" gracePeriod=2 Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.182692 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqqvv"] Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.183072 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zqqvv" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="registry-server" 
containerID="cri-o://359fdaf31fe6b20b24349fbcd0fa88495f66725f41848eea6762b2e197b6023e" gracePeriod=2 Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.601521 4656 generic.go:334] "Generic (PLEG): container finished" podID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerID="359fdaf31fe6b20b24349fbcd0fa88495f66725f41848eea6762b2e197b6023e" exitCode=0 Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.601860 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqvv" event={"ID":"d4371d7c-f72d-4765-9101-34946d11d0e7","Type":"ContainerDied","Data":"359fdaf31fe6b20b24349fbcd0fa88495f66725f41848eea6762b2e197b6023e"} Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.633139 4656 generic.go:334] "Generic (PLEG): container finished" podID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerID="98e5a5ee9e3fbdbc98cc8d8842a89ca56d4d33d594c2125a555bfd41937a3fbb" exitCode=0 Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.633245 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerDied","Data":"98e5a5ee9e3fbdbc98cc8d8842a89ca56d4d33d594c2125a555bfd41937a3fbb"} Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.634959 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" event={"ID":"7d2dc2b5-f3f8-4be6-87ec-08e25875d581","Type":"ContainerStarted","Data":"f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7"} Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.635173 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" podUID="7d2dc2b5-f3f8-4be6-87ec-08e25875d581" containerName="controller-manager" containerID="cri-o://f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7" gracePeriod=30 Jan 28 15:24:04 
crc kubenswrapper[4656]: I0128 15:24:04.635696 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.670632 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.670936 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" event={"ID":"986d7566-ac0b-4651-b2ea-3a839466d5be","Type":"ContainerStarted","Data":"07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53"} Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.670966 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" event={"ID":"986d7566-ac0b-4651-b2ea-3a839466d5be","Type":"ContainerStarted","Data":"e1f2c684d015240031f9a01efcbd97b6607151f5966c4bcd7cb98bb3dac1e473"} Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.671282 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" podUID="986d7566-ac0b-4651-b2ea-3a839466d5be" containerName="route-controller-manager" containerID="cri-o://07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53" gracePeriod=30 Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.671851 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.680748 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.785513 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" podStartSLOduration=3.785477943 podStartE2EDuration="3.785477943s" podCreationTimestamp="2026-01-28 15:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:04.672215647 +0000 UTC m=+335.180386471" watchObservedRunningTime="2026-01-28 15:24:04.785477943 +0000 UTC m=+335.293648747" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.819697 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" podStartSLOduration=3.81967182 podStartE2EDuration="3.81967182s" podCreationTimestamp="2026-01-28 15:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:04.817502169 +0000 UTC m=+335.325672983" watchObservedRunningTime="2026-01-28 15:24:04.81967182 +0000 UTC m=+335.327842624" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.872975 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm8p5\" (UniqueName: \"kubernetes.io/projected/e4ed5142-92c2-4f59-a383-f91999ce3dff-kube-api-access-lm8p5\") pod \"e4ed5142-92c2-4f59-a383-f91999ce3dff\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.873076 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-catalog-content\") pod \"e4ed5142-92c2-4f59-a383-f91999ce3dff\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " 
Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.873127 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-utilities\") pod \"e4ed5142-92c2-4f59-a383-f91999ce3dff\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.876456 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-utilities" (OuterVolumeSpecName: "utilities") pod "e4ed5142-92c2-4f59-a383-f91999ce3dff" (UID: "e4ed5142-92c2-4f59-a383-f91999ce3dff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.878844 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.890653 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ed5142-92c2-4f59-a383-f91999ce3dff-kube-api-access-lm8p5" (OuterVolumeSpecName: "kube-api-access-lm8p5") pod "e4ed5142-92c2-4f59-a383-f91999ce3dff" (UID: "e4ed5142-92c2-4f59-a383-f91999ce3dff"). InnerVolumeSpecName "kube-api-access-lm8p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.901907 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4ed5142-92c2-4f59-a383-f91999ce3dff" (UID: "e4ed5142-92c2-4f59-a383-f91999ce3dff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.973960 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-utilities\") pod \"d4371d7c-f72d-4765-9101-34946d11d0e7\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.974389 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-catalog-content\") pod \"d4371d7c-f72d-4765-9101-34946d11d0e7\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.974505 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4wrb\" (UniqueName: \"kubernetes.io/projected/d4371d7c-f72d-4765-9101-34946d11d0e7-kube-api-access-z4wrb\") pod \"d4371d7c-f72d-4765-9101-34946d11d0e7\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.974805 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm8p5\" (UniqueName: \"kubernetes.io/projected/e4ed5142-92c2-4f59-a383-f91999ce3dff-kube-api-access-lm8p5\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.974884 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.974980 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.978757 
4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-utilities" (OuterVolumeSpecName: "utilities") pod "d4371d7c-f72d-4765-9101-34946d11d0e7" (UID: "d4371d7c-f72d-4765-9101-34946d11d0e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.979663 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4371d7c-f72d-4765-9101-34946d11d0e7-kube-api-access-z4wrb" (OuterVolumeSpecName: "kube-api-access-z4wrb") pod "d4371d7c-f72d-4765-9101-34946d11d0e7" (UID: "d4371d7c-f72d-4765-9101-34946d11d0e7"). InnerVolumeSpecName "kube-api-access-z4wrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.075843 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4wrb\" (UniqueName: \"kubernetes.io/projected/d4371d7c-f72d-4765-9101-34946d11d0e7-kube-api-access-z4wrb\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.075925 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.125855 4656 patch_prober.go:28] interesting pod/route-controller-manager-6c967b544-dfmnj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:42858->10.217.0.58:8443: read: connection reset by peer" start-of-body= Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.126216 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" 
podUID="986d7566-ac0b-4651-b2ea-3a839466d5be" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:42858->10.217.0.58:8443: read: connection reset by peer" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.126696 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.144852 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4371d7c-f72d-4765-9101-34946d11d0e7" (UID: "d4371d7c-f72d-4765-9101-34946d11d0e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.176875 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-config\") pod \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.176967 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-serving-cert\") pod \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.177006 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-client-ca\") pod \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.177021 4656 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-proxy-ca-bundles\") pod \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.177066 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25kqc\" (UniqueName: \"kubernetes.io/projected/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-kube-api-access-25kqc\") pod \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.177390 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.181379 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7d2dc2b5-f3f8-4be6-87ec-08e25875d581" (UID: "7d2dc2b5-f3f8-4be6-87ec-08e25875d581"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.181436 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-config" (OuterVolumeSpecName: "config") pod "7d2dc2b5-f3f8-4be6-87ec-08e25875d581" (UID: "7d2dc2b5-f3f8-4be6-87ec-08e25875d581"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.181786 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d2dc2b5-f3f8-4be6-87ec-08e25875d581" (UID: "7d2dc2b5-f3f8-4be6-87ec-08e25875d581"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.185104 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d2dc2b5-f3f8-4be6-87ec-08e25875d581" (UID: "7d2dc2b5-f3f8-4be6-87ec-08e25875d581"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.185298 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-kube-api-access-25kqc" (OuterVolumeSpecName: "kube-api-access-25kqc") pod "7d2dc2b5-f3f8-4be6-87ec-08e25875d581" (UID: "7d2dc2b5-f3f8-4be6-87ec-08e25875d581"). InnerVolumeSpecName "kube-api-access-25kqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.279249 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.279494 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.279576 4656 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.279780 4656 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.279883 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25kqc\" (UniqueName: \"kubernetes.io/projected/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-kube-api-access-25kqc\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.411724 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6c967b544-dfmnj_986d7566-ac0b-4651-b2ea-3a839466d5be/route-controller-manager/0.log" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.411849 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.481545 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-client-ca\") pod \"986d7566-ac0b-4651-b2ea-3a839466d5be\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.481840 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-config\") pod \"986d7566-ac0b-4651-b2ea-3a839466d5be\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.481949 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/986d7566-ac0b-4651-b2ea-3a839466d5be-serving-cert\") pod \"986d7566-ac0b-4651-b2ea-3a839466d5be\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.482124 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqf4c\" (UniqueName: \"kubernetes.io/projected/986d7566-ac0b-4651-b2ea-3a839466d5be-kube-api-access-lqf4c\") pod \"986d7566-ac0b-4651-b2ea-3a839466d5be\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.482668 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-client-ca" (OuterVolumeSpecName: "client-ca") pod "986d7566-ac0b-4651-b2ea-3a839466d5be" (UID: "986d7566-ac0b-4651-b2ea-3a839466d5be"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.483247 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-config" (OuterVolumeSpecName: "config") pod "986d7566-ac0b-4651-b2ea-3a839466d5be" (UID: "986d7566-ac0b-4651-b2ea-3a839466d5be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.490607 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986d7566-ac0b-4651-b2ea-3a839466d5be-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "986d7566-ac0b-4651-b2ea-3a839466d5be" (UID: "986d7566-ac0b-4651-b2ea-3a839466d5be"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.491334 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986d7566-ac0b-4651-b2ea-3a839466d5be-kube-api-access-lqf4c" (OuterVolumeSpecName: "kube-api-access-lqf4c") pod "986d7566-ac0b-4651-b2ea-3a839466d5be" (UID: "986d7566-ac0b-4651-b2ea-3a839466d5be"). InnerVolumeSpecName "kube-api-access-lqf4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.583205 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqf4c\" (UniqueName: \"kubernetes.io/projected/986d7566-ac0b-4651-b2ea-3a839466d5be-kube-api-access-lqf4c\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.583237 4656 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.583252 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.583260 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/986d7566-ac0b-4651-b2ea-3a839466d5be-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.678560 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqvv" event={"ID":"d4371d7c-f72d-4765-9101-34946d11d0e7","Type":"ContainerDied","Data":"212e83a1856ef8c98ccafed8d21edbd6de5e1d834be4a94f2e2855ab3ac7c760"} Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.678679 4656 scope.go:117] "RemoveContainer" containerID="359fdaf31fe6b20b24349fbcd0fa88495f66725f41848eea6762b2e197b6023e" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.678906 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.683526 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.683515 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerDied","Data":"675ef5c857ca65a9f9f28b7fc7e7d9027ae949f4a6695cc47226b6c49738e9b9"} Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.686427 4656 generic.go:334] "Generic (PLEG): container finished" podID="7d2dc2b5-f3f8-4be6-87ec-08e25875d581" containerID="f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7" exitCode=0 Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.686646 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.686602 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" event={"ID":"7d2dc2b5-f3f8-4be6-87ec-08e25875d581","Type":"ContainerDied","Data":"f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7"} Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.686720 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" event={"ID":"7d2dc2b5-f3f8-4be6-87ec-08e25875d581","Type":"ContainerDied","Data":"97393666ef37592c4bfc65dc644d07d0966a66451e62aa8f8cf1cef3bd49cf89"} Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.692325 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6c967b544-dfmnj_986d7566-ac0b-4651-b2ea-3a839466d5be/route-controller-manager/0.log" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.692389 4656 generic.go:334] "Generic (PLEG): container finished" podID="986d7566-ac0b-4651-b2ea-3a839466d5be" 
containerID="07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53" exitCode=255
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.692431 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" event={"ID":"986d7566-ac0b-4651-b2ea-3a839466d5be","Type":"ContainerDied","Data":"07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53"}
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.692458 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" event={"ID":"986d7566-ac0b-4651-b2ea-3a839466d5be","Type":"ContainerDied","Data":"e1f2c684d015240031f9a01efcbd97b6607151f5966c4bcd7cb98bb3dac1e473"}
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.692437 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj"
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.701979 4656 scope.go:117] "RemoveContainer" containerID="5b4233be7c7686331f99bc85bff4b844e11a65ed773671095a89af9d01cb614c"
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.707700 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqqvv"]
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.712690 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zqqvv"]
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.719022 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cxc6z"]
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.721150 4656 scope.go:117] "RemoveContainer" containerID="1c47ef6d923405e3cbd5ebf98dc7b072df4b7f29ebc5c89b0c18a12b52617312"
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.724213 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cxc6z"]
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.732660 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-667b4"]
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.738307 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-667b4"]
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.746068 4656 scope.go:117] "RemoveContainer" containerID="98e5a5ee9e3fbdbc98cc8d8842a89ca56d4d33d594c2125a555bfd41937a3fbb"
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.747728 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj"]
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.754490 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj"]
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.759219 4656 scope.go:117] "RemoveContainer" containerID="10ebf6db7ee2964bf2e7f1a9dc6ff72c579ec55214bcb19505e1b006f8bbcdba"
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.775830 4656 scope.go:117] "RemoveContainer" containerID="7cabac754976a0ee49a3872bb0095429cd22f9300449f019606cabdd291f04c7"
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.790005 4656 scope.go:117] "RemoveContainer" containerID="f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7"
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.803434 4656 scope.go:117] "RemoveContainer" containerID="f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7"
Jan 28 15:24:05 crc kubenswrapper[4656]: E0128 15:24:05.804152 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7\": container with ID starting with f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7 not found: ID does not exist" containerID="f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7"
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.804220 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7"} err="failed to get container status \"f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7\": rpc error: code = NotFound desc = could not find container \"f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7\": container with ID starting with f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7 not found: ID does not exist"
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.804253 4656 scope.go:117] "RemoveContainer" containerID="07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53"
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.817000 4656 scope.go:117] "RemoveContainer" containerID="07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53"
Jan 28 15:24:05 crc kubenswrapper[4656]: E0128 15:24:05.817851 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53\": container with ID starting with 07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53 not found: ID does not exist" containerID="07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53"
Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.817892 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53"} err="failed to get container status \"07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53\": rpc error: code = NotFound desc = could not find container \"07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53\": container with ID starting with 07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53 not found: ID does not exist"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.177608 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d2dc2b5-f3f8-4be6-87ec-08e25875d581" path="/var/lib/kubelet/pods/7d2dc2b5-f3f8-4be6-87ec-08e25875d581/volumes"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.178330 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="986d7566-ac0b-4651-b2ea-3a839466d5be" path="/var/lib/kubelet/pods/986d7566-ac0b-4651-b2ea-3a839466d5be/volumes"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.178777 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" path="/var/lib/kubelet/pods/d4371d7c-f72d-4765-9101-34946d11d0e7/volumes"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.179328 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" path="/var/lib/kubelet/pods/e4ed5142-92c2-4f59-a383-f91999ce3dff/volumes"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.614658 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"]
Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.614987 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="registry-server"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615010 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="registry-server"
Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.615026 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="extract-utilities"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615032 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="extract-utilities"
Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.615039 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="extract-content"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615045 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="extract-content"
Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.615064 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="registry-server"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615073 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="registry-server"
Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.615083 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986d7566-ac0b-4651-b2ea-3a839466d5be" containerName="route-controller-manager"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615089 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="986d7566-ac0b-4651-b2ea-3a839466d5be" containerName="route-controller-manager"
Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.615102 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="extract-utilities"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615107 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="extract-utilities"
Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.615114 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="extract-content"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615119 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="extract-content"
Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.615128 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2dc2b5-f3f8-4be6-87ec-08e25875d581" containerName="controller-manager"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615134 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2dc2b5-f3f8-4be6-87ec-08e25875d581" containerName="controller-manager"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615268 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d2dc2b5-f3f8-4be6-87ec-08e25875d581" containerName="controller-manager"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615287 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="986d7566-ac0b-4651-b2ea-3a839466d5be" containerName="route-controller-manager"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615306 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="registry-server"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615317 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="registry-server"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615833 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.618877 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"]
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.619259 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.619731 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.619858 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.620118 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.620448 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.621454 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.621891 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.622721 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.623034 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.623307 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.623406 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.623729 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.631092 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"]
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.632044 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.636049 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.639535 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"]
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.743814 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-client-ca\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.743894 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93ed009-2548-461a-b6c8-e8cf2349fe79-config\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.743949 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-proxy-ca-bundles\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.743980 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f93ed009-2548-461a-b6c8-e8cf2349fe79-client-ca\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.744005 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f93ed009-2548-461a-b6c8-e8cf2349fe79-serving-cert\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.744027 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed0b9628-0338-430a-83bc-0d22461d043c-serving-cert\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.744042 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzsm9\" (UniqueName: \"kubernetes.io/projected/ed0b9628-0338-430a-83bc-0d22461d043c-kube-api-access-lzsm9\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.744071 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxmjm\" (UniqueName: \"kubernetes.io/projected/f93ed009-2548-461a-b6c8-e8cf2349fe79-kube-api-access-xxmjm\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.744090 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-config\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845690 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-proxy-ca-bundles\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845767 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f93ed009-2548-461a-b6c8-e8cf2349fe79-client-ca\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845800 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f93ed009-2548-461a-b6c8-e8cf2349fe79-serving-cert\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845820 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed0b9628-0338-430a-83bc-0d22461d043c-serving-cert\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845848 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzsm9\" (UniqueName: \"kubernetes.io/projected/ed0b9628-0338-430a-83bc-0d22461d043c-kube-api-access-lzsm9\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845890 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxmjm\" (UniqueName: \"kubernetes.io/projected/f93ed009-2548-461a-b6c8-e8cf2349fe79-kube-api-access-xxmjm\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845922 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-config\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845969 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-client-ca\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.846001 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93ed009-2548-461a-b6c8-e8cf2349fe79-config\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.847205 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-client-ca\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.847288 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-proxy-ca-bundles\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.847639 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93ed009-2548-461a-b6c8-e8cf2349fe79-config\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.847642 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f93ed009-2548-461a-b6c8-e8cf2349fe79-client-ca\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.847884 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-config\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.851844 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f93ed009-2548-461a-b6c8-e8cf2349fe79-serving-cert\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.852877 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed0b9628-0338-430a-83bc-0d22461d043c-serving-cert\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.866862 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxmjm\" (UniqueName: \"kubernetes.io/projected/f93ed009-2548-461a-b6c8-e8cf2349fe79-kube-api-access-xxmjm\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.875232 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzsm9\" (UniqueName: \"kubernetes.io/projected/ed0b9628-0338-430a-83bc-0d22461d043c-kube-api-access-lzsm9\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.939716 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.947310 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.261618 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"]
Jan 28 15:24:08 crc kubenswrapper[4656]: W0128 15:24:08.291554 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf93ed009_2548_461a_b6c8_e8cf2349fe79.slice/crio-b81821f8d2073506400e20e4e2cb986218874ad389e86710a758010d96e477a3 WatchSource:0}: Error finding container b81821f8d2073506400e20e4e2cb986218874ad389e86710a758010d96e477a3: Status 404 returned error can't find the container with id b81821f8d2073506400e20e4e2cb986218874ad389e86710a758010d96e477a3
Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.483921 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"]
Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.758810 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" event={"ID":"f93ed009-2548-461a-b6c8-e8cf2349fe79","Type":"ContainerStarted","Data":"f60bdb5890ca2ef0bdced1158621650808c2caf96eeaec77201ab6f46133d2e4"}
Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.758866 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" event={"ID":"f93ed009-2548-461a-b6c8-e8cf2349fe79","Type":"ContainerStarted","Data":"b81821f8d2073506400e20e4e2cb986218874ad389e86710a758010d96e477a3"}
Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.759222 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.761118 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" event={"ID":"ed0b9628-0338-430a-83bc-0d22461d043c","Type":"ContainerStarted","Data":"1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2"}
Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.761145 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" event={"ID":"ed0b9628-0338-430a-83bc-0d22461d043c","Type":"ContainerStarted","Data":"4bc6589f9eb7f25b6e2f6f6b82c40c2f6d98d2ea31224a268facb96b14b7970f"}
Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.761802 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.766578 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.855839 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" podStartSLOduration=5.855810355 podStartE2EDuration="5.855810355s" podCreationTimestamp="2026-01-28 15:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:08.821642118 +0000 UTC m=+339.329812922" watchObservedRunningTime="2026-01-28 15:24:08.855810355 +0000 UTC m=+339.363981159"
Jan 28 15:24:09 crc kubenswrapper[4656]: I0128 15:24:09.310065 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"
Jan 28 15:24:09 crc kubenswrapper[4656]: I0128 15:24:09.332605 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" podStartSLOduration=6.332579848 podStartE2EDuration="6.332579848s" podCreationTimestamp="2026-01-28 15:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:08.873945528 +0000 UTC m=+339.382116342" watchObservedRunningTime="2026-01-28 15:24:09.332579848 +0000 UTC m=+339.840750672"
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.184418 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"]
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.185281 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" podUID="ed0b9628-0338-430a-83bc-0d22461d043c" containerName="controller-manager" containerID="cri-o://1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2" gracePeriod=30
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.663098 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.821267 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed0b9628-0338-430a-83bc-0d22461d043c-serving-cert\") pod \"ed0b9628-0338-430a-83bc-0d22461d043c\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") "
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.821332 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-config\") pod \"ed0b9628-0338-430a-83bc-0d22461d043c\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") "
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.822337 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-config" (OuterVolumeSpecName: "config") pod "ed0b9628-0338-430a-83bc-0d22461d043c" (UID: "ed0b9628-0338-430a-83bc-0d22461d043c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.822407 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzsm9\" (UniqueName: \"kubernetes.io/projected/ed0b9628-0338-430a-83bc-0d22461d043c-kube-api-access-lzsm9\") pod \"ed0b9628-0338-430a-83bc-0d22461d043c\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") "
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.822444 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-client-ca\") pod \"ed0b9628-0338-430a-83bc-0d22461d043c\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") "
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.822768 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-proxy-ca-bundles\") pod \"ed0b9628-0338-430a-83bc-0d22461d043c\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") "
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.823184 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.823584 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-client-ca" (OuterVolumeSpecName: "client-ca") pod "ed0b9628-0338-430a-83bc-0d22461d043c" (UID: "ed0b9628-0338-430a-83bc-0d22461d043c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.823799 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ed0b9628-0338-430a-83bc-0d22461d043c" (UID: "ed0b9628-0338-430a-83bc-0d22461d043c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.827311 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed0b9628-0338-430a-83bc-0d22461d043c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ed0b9628-0338-430a-83bc-0d22461d043c" (UID: "ed0b9628-0338-430a-83bc-0d22461d043c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.828505 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed0b9628-0338-430a-83bc-0d22461d043c-kube-api-access-lzsm9" (OuterVolumeSpecName: "kube-api-access-lzsm9") pod "ed0b9628-0338-430a-83bc-0d22461d043c" (UID: "ed0b9628-0338-430a-83bc-0d22461d043c"). InnerVolumeSpecName "kube-api-access-lzsm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.833771 4656 generic.go:334] "Generic (PLEG): container finished" podID="ed0b9628-0338-430a-83bc-0d22461d043c" containerID="1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2" exitCode=0
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.833821 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" event={"ID":"ed0b9628-0338-430a-83bc-0d22461d043c","Type":"ContainerDied","Data":"1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2"}
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.833877 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" event={"ID":"ed0b9628-0338-430a-83bc-0d22461d043c","Type":"ContainerDied","Data":"4bc6589f9eb7f25b6e2f6f6b82c40c2f6d98d2ea31224a268facb96b14b7970f"}
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.833897 4656 scope.go:117] "RemoveContainer" containerID="1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2"
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.834071 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.872659 4656 scope.go:117] "RemoveContainer" containerID="1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2"
Jan 28 15:24:21 crc kubenswrapper[4656]: E0128 15:24:21.873874 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2\": container with ID starting with 1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2 not found: ID does not exist" containerID="1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2"
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.873915 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2"} err="failed to get container status \"1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2\": rpc error: code = NotFound desc = could not find container \"1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2\": container with ID starting with 1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2 not found: ID does not exist"
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.894757 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"]
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.921441 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"]
Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.928718 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed0b9628-0338-430a-83bc-0d22461d043c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:21 crc
kubenswrapper[4656]: I0128 15:24:21.928761 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzsm9\" (UniqueName: \"kubernetes.io/projected/ed0b9628-0338-430a-83bc-0d22461d043c-kube-api-access-lzsm9\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.928774 4656 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.928783 4656 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.630569 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv"] Jan 28 15:24:22 crc kubenswrapper[4656]: E0128 15:24:22.631203 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0b9628-0338-430a-83bc-0d22461d043c" containerName="controller-manager" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.631245 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0b9628-0338-430a-83bc-0d22461d043c" containerName="controller-manager" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.631378 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0b9628-0338-430a-83bc-0d22461d043c" containerName="controller-manager" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.631868 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.636038 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.636218 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.636562 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.636759 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.636655 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.638267 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-proxy-ca-bundles\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.638440 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e43dd75-be02-4340-baf4-028a9587f06f-serving-cert\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.638582 4656 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-client-ca\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.638750 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzrgm\" (UniqueName: \"kubernetes.io/projected/7e43dd75-be02-4340-baf4-028a9587f06f-kube-api-access-mzrgm\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.638912 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-config\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.638646 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.645254 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.647478 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv"] Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.739779 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-proxy-ca-bundles\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.740073 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e43dd75-be02-4340-baf4-028a9587f06f-serving-cert\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.740252 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-client-ca\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.740390 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzrgm\" (UniqueName: \"kubernetes.io/projected/7e43dd75-be02-4340-baf4-028a9587f06f-kube-api-access-mzrgm\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.740578 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-config\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.741038 4656 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-proxy-ca-bundles\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.741674 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-client-ca\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.742227 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-config\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.761318 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e43dd75-be02-4340-baf4-028a9587f06f-serving-cert\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.762381 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzrgm\" (UniqueName: \"kubernetes.io/projected/7e43dd75-be02-4340-baf4-028a9587f06f-kube-api-access-mzrgm\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 
15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.949495 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.181021 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed0b9628-0338-430a-83bc-0d22461d043c" path="/var/lib/kubelet/pods/ed0b9628-0338-430a-83bc-0d22461d043c/volumes" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.366685 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv"] Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.404191 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pml8w"] Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.405063 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.432976 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pml8w"] Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549266 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-registry-certificates\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549433 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-trusted-ca\") pod \"image-registry-66df7c8f76-pml8w\" (UID: 
\"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549481 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-bound-sa-token\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549564 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549622 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549671 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549717 4656 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-registry-tls\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549801 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjxjf\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-kube-api-access-sjxjf\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.570441 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.650784 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-registry-certificates\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.651621 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-trusted-ca\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.651725 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-bound-sa-token\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.651775 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.651862 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.651897 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-registry-tls\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.651933 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjxjf\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-kube-api-access-sjxjf\") 
pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.652547 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.652974 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-trusted-ca\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.653349 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-registry-certificates\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.656667 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.656839 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-registry-tls\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.677870 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-bound-sa-token\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.686035 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjxjf\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-kube-api-access-sjxjf\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.721369 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.860267 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" event={"ID":"7e43dd75-be02-4340-baf4-028a9587f06f","Type":"ContainerStarted","Data":"0cf4ac66d4b5a3a96e428436b7b84c537349cc6f183dd94033321f0a334c237a"} Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.860586 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" event={"ID":"7e43dd75-be02-4340-baf4-028a9587f06f","Type":"ContainerStarted","Data":"ec569fbc46cfdc5054fcc46ef0a44a9bbdf1e8242676628bd1b4ee1ac69f73e9"} Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.862662 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.868818 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.897480 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" podStartSLOduration=2.897461188 podStartE2EDuration="2.897461188s" podCreationTimestamp="2026-01-28 15:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:23.895994537 +0000 UTC m=+354.404165341" watchObservedRunningTime="2026-01-28 15:24:23.897461188 +0000 UTC m=+354.405631992" Jan 28 15:24:24 crc kubenswrapper[4656]: I0128 15:24:24.009353 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pml8w"] Jan 28 15:24:24 crc kubenswrapper[4656]: 
W0128 15:24:24.015883 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c8e92c4_99e6_4a52_b35c_6d2cf9abba12.slice/crio-81e8df40c3f05e32ea69dace326b1b31ae272b669ab024b1c60a4ca80e8b5ea4 WatchSource:0}: Error finding container 81e8df40c3f05e32ea69dace326b1b31ae272b669ab024b1c60a4ca80e8b5ea4: Status 404 returned error can't find the container with id 81e8df40c3f05e32ea69dace326b1b31ae272b669ab024b1c60a4ca80e8b5ea4 Jan 28 15:24:24 crc kubenswrapper[4656]: I0128 15:24:24.870512 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" event={"ID":"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12","Type":"ContainerStarted","Data":"a0a025077952ef29c41d64e6720f6371e9d5e48c59042d94854b562a3237262e"} Jan 28 15:24:24 crc kubenswrapper[4656]: I0128 15:24:24.870989 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" event={"ID":"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12","Type":"ContainerStarted","Data":"81e8df40c3f05e32ea69dace326b1b31ae272b669ab024b1c60a4ca80e8b5ea4"} Jan 28 15:24:24 crc kubenswrapper[4656]: I0128 15:24:24.871027 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:24 crc kubenswrapper[4656]: I0128 15:24:24.895467 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" podStartSLOduration=1.8954350899999999 podStartE2EDuration="1.89543509s" podCreationTimestamp="2026-01-28 15:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:24.89048555 +0000 UTC m=+355.398656354" watchObservedRunningTime="2026-01-28 15:24:24.89543509 +0000 UTC m=+355.403605894" Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 
15:24:39.860139 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gzr9v"] Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.860939 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gzr9v" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="registry-server" containerID="cri-o://891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855" gracePeriod=30 Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.898277 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w4vpf"] Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.899021 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w4vpf" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="registry-server" containerID="cri-o://10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746" gracePeriod=30 Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.922494 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66pz7"] Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.922839 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator" containerID="cri-o://38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4" gracePeriod=30 Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.926482 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhqpx"] Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.927050 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nhqpx" 
podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="registry-server" containerID="cri-o://78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41" gracePeriod=30 Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.936197 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8dc6j"] Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.936804 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8dc6j" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="registry-server" containerID="cri-o://f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9" gracePeriod=30 Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.945573 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2xzq6"] Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.946718 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.965738 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2xzq6"] Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.981438 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vscn\" (UniqueName: \"kubernetes.io/projected/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-kube-api-access-4vscn\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.981529 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.981609 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.082533 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vscn\" (UniqueName: \"kubernetes.io/projected/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-kube-api-access-4vscn\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: 
\"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.082573 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.082603 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.084958 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.094864 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.098325 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4vscn\" (UniqueName: \"kubernetes.io/projected/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-kube-api-access-4vscn\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.260132 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.440889 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.583943 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.590391 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftms7\" (UniqueName: \"kubernetes.io/projected/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-kube-api-access-ftms7\") pod \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.590488 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-catalog-content\") pod \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.590529 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-utilities\") pod \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " Jan 28 15:24:40 crc 
kubenswrapper[4656]: I0128 15:24:40.591500 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-utilities" (OuterVolumeSpecName: "utilities") pod "f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" (UID: "f0fe12da-fb7d-444b-b8d3-47e5988fb7f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.619493 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-kube-api-access-ftms7" (OuterVolumeSpecName: "kube-api-access-ftms7") pod "f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" (UID: "f0fe12da-fb7d-444b-b8d3-47e5988fb7f9"). InnerVolumeSpecName "kube-api-access-ftms7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.693459 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" (UID: "f0fe12da-fb7d-444b-b8d3-47e5988fb7f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.694014 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p5b8\" (UniqueName: \"kubernetes.io/projected/c7b09f99-0d13-49a0-8b8d-fc77915a171d-kube-api-access-7p5b8\") pod \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.694053 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-trusted-ca\") pod \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.694098 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-operator-metrics\") pod \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.694390 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftms7\" (UniqueName: \"kubernetes.io/projected/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-kube-api-access-ftms7\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.694407 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.694419 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:40 
crc kubenswrapper[4656]: I0128 15:24:40.696523 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c7b09f99-0d13-49a0-8b8d-fc77915a171d" (UID: "c7b09f99-0d13-49a0-8b8d-fc77915a171d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.704777 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c7b09f99-0d13-49a0-8b8d-fc77915a171d" (UID: "c7b09f99-0d13-49a0-8b8d-fc77915a171d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.708336 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b09f99-0d13-49a0-8b8d-fc77915a171d-kube-api-access-7p5b8" (OuterVolumeSpecName: "kube-api-access-7p5b8") pod "c7b09f99-0d13-49a0-8b8d-fc77915a171d" (UID: "c7b09f99-0d13-49a0-8b8d-fc77915a171d"). InnerVolumeSpecName "kube-api-access-7p5b8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.801842 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p5b8\" (UniqueName: \"kubernetes.io/projected/c7b09f99-0d13-49a0-8b8d-fc77915a171d-kube-api-access-7p5b8\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.801879 4656 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.801889 4656 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.872089 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.887697 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.905318 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqjd8\" (UniqueName: \"kubernetes.io/projected/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-kube-api-access-rqjd8\") pod \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.905366 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-catalog-content\") pod \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.905404 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-utilities\") pod \"7de9fc74-9948-4e73-ac93-25f9c22189ce\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.905419 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-utilities\") pod \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.905439 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-catalog-content\") pod \"7de9fc74-9948-4e73-ac93-25f9c22189ce\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.905468 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-2d5jj\" (UniqueName: \"kubernetes.io/projected/7de9fc74-9948-4e73-ac93-25f9c22189ce-kube-api-access-2d5jj\") pod \"7de9fc74-9948-4e73-ac93-25f9c22189ce\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.906541 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-utilities" (OuterVolumeSpecName: "utilities") pod "fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" (UID: "fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.906553 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-utilities" (OuterVolumeSpecName: "utilities") pod "7de9fc74-9948-4e73-ac93-25f9c22189ce" (UID: "7de9fc74-9948-4e73-ac93-25f9c22189ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.908859 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7de9fc74-9948-4e73-ac93-25f9c22189ce-kube-api-access-2d5jj" (OuterVolumeSpecName: "kube-api-access-2d5jj") pod "7de9fc74-9948-4e73-ac93-25f9c22189ce" (UID: "7de9fc74-9948-4e73-ac93-25f9c22189ce"). InnerVolumeSpecName "kube-api-access-2d5jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.911307 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-kube-api-access-rqjd8" (OuterVolumeSpecName: "kube-api-access-rqjd8") pod "fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" (UID: "fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1"). InnerVolumeSpecName "kube-api-access-rqjd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.913671 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.949822 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2xzq6"] Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.971078 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7de9fc74-9948-4e73-ac93-25f9c22189ce" (UID: "7de9fc74-9948-4e73-ac93-25f9c22189ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.977860 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" event={"ID":"c48ec7e7-12ff-47f6-9a82-59078f7c2b04","Type":"ContainerStarted","Data":"f72d4e2fef18c6315e67c1106c8d141421734775a783b48126b08563e54bb13f"} Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.989395 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" (UID: "fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.993411 4656 generic.go:334] "Generic (PLEG): container finished" podID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerID="f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9" exitCode=0 Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.993503 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dc6j" event={"ID":"a6b1aae7-caaa-427d-8b07-705b02e81763","Type":"ContainerDied","Data":"f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9"} Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.993549 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dc6j" event={"ID":"a6b1aae7-caaa-427d-8b07-705b02e81763","Type":"ContainerDied","Data":"4e651d4a13052ffb208c07169669a56bdcc7dc1c17ca4b751d5784fb82cafa0f"} Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.993601 4656 scope.go:117] "RemoveContainer" containerID="f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.993780 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.007581 4656 generic.go:334] "Generic (PLEG): container finished" podID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerID="78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41" exitCode=0 Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.007692 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhqpx" event={"ID":"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1","Type":"ContainerDied","Data":"78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.007750 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhqpx" event={"ID":"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1","Type":"ContainerDied","Data":"cf973c02c04ef3c79efcf712d2d128bb8313be52f3235da8828930c75b3c34ff"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.007851 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.008473 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d5jj\" (UniqueName: \"kubernetes.io/projected/7de9fc74-9948-4e73-ac93-25f9c22189ce-kube-api-access-2d5jj\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.008847 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqjd8\" (UniqueName: \"kubernetes.io/projected/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-kube-api-access-rqjd8\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.008954 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.009040 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.009143 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.009246 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.012548 4656 generic.go:334] "Generic (PLEG): container finished" podID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerID="10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746" exitCode=0 Jan 28 15:24:41 crc kubenswrapper[4656]: 
I0128 15:24:41.012700 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.013657 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4vpf" event={"ID":"7de9fc74-9948-4e73-ac93-25f9c22189ce","Type":"ContainerDied","Data":"10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.013750 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4vpf" event={"ID":"7de9fc74-9948-4e73-ac93-25f9c22189ce","Type":"ContainerDied","Data":"cf818feb09c0ccfd9e455faaffde6799b01a2c05ed95767814d1627d35d8c054"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.021600 4656 generic.go:334] "Generic (PLEG): container finished" podID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerID="891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855" exitCode=0 Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.021654 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzr9v" event={"ID":"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9","Type":"ContainerDied","Data":"891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.021675 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzr9v" event={"ID":"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9","Type":"ContainerDied","Data":"c98421a3a0c42729a0b7a3850570dd8ac89ef8e57c178261115d715189f4f351"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.021730 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.027777 4656 generic.go:334] "Generic (PLEG): container finished" podID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerID="38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4" exitCode=0 Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.027810 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" event={"ID":"c7b09f99-0d13-49a0-8b8d-fc77915a171d","Type":"ContainerDied","Data":"38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.027832 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" event={"ID":"c7b09f99-0d13-49a0-8b8d-fc77915a171d","Type":"ContainerDied","Data":"93539b6344f80c28c33a800b6b17b3b195013261b9320da802d38ff972cee5ee"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.027876 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.058928 4656 scope.go:117] "RemoveContainer" containerID="f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.100854 4656 scope.go:117] "RemoveContainer" containerID="98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.109712 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-catalog-content\") pod \"a6b1aae7-caaa-427d-8b07-705b02e81763\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.109745 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-utilities\") pod \"a6b1aae7-caaa-427d-8b07-705b02e81763\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.109786 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkjf8\" (UniqueName: \"kubernetes.io/projected/a6b1aae7-caaa-427d-8b07-705b02e81763-kube-api-access-zkjf8\") pod \"a6b1aae7-caaa-427d-8b07-705b02e81763\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.111127 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-utilities" (OuterVolumeSpecName: "utilities") pod "a6b1aae7-caaa-427d-8b07-705b02e81763" (UID: "a6b1aae7-caaa-427d-8b07-705b02e81763"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.121445 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b1aae7-caaa-427d-8b07-705b02e81763-kube-api-access-zkjf8" (OuterVolumeSpecName: "kube-api-access-zkjf8") pod "a6b1aae7-caaa-427d-8b07-705b02e81763" (UID: "a6b1aae7-caaa-427d-8b07-705b02e81763"). InnerVolumeSpecName "kube-api-access-zkjf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.123999 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gzr9v"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.127763 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gzr9v"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.137094 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66pz7"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.150407 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66pz7"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.154248 4656 scope.go:117] "RemoveContainer" containerID="f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.158399 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9\": container with ID starting with f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9 not found: ID does not exist" containerID="f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.158459 4656 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9"} err="failed to get container status \"f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9\": rpc error: code = NotFound desc = could not find container \"f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9\": container with ID starting with f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9 not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.158501 4656 scope.go:117] "RemoveContainer" containerID="f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.163932 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f\": container with ID starting with f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f not found: ID does not exist" containerID="f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.163990 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f"} err="failed to get container status \"f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f\": rpc error: code = NotFound desc = could not find container \"f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f\": container with ID starting with f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.164023 4656 scope.go:117] "RemoveContainer" containerID="98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.167909 4656 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388\": container with ID starting with 98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388 not found: ID does not exist" containerID="98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.167959 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388"} err="failed to get container status \"98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388\": rpc error: code = NotFound desc = could not find container \"98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388\": container with ID starting with 98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388 not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.167987 4656 scope.go:117] "RemoveContainer" containerID="78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.200385 4656 scope.go:117] "RemoveContainer" containerID="3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.203405 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" path="/var/lib/kubelet/pods/c7b09f99-0d13-49a0-8b8d-fc77915a171d/volumes" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.204218 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" path="/var/lib/kubelet/pods/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9/volumes" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.210367 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhqpx"] Jan 28 15:24:41 crc 
kubenswrapper[4656]: I0128 15:24:41.210596 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhqpx"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.211748 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.211881 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkjf8\" (UniqueName: \"kubernetes.io/projected/a6b1aae7-caaa-427d-8b07-705b02e81763-kube-api-access-zkjf8\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.216372 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w4vpf"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.234304 4656 scope.go:117] "RemoveContainer" containerID="daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.245363 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w4vpf"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.257816 4656 scope.go:117] "RemoveContainer" containerID="78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.259821 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41\": container with ID starting with 78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41 not found: ID does not exist" containerID="78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.259868 4656 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41"} err="failed to get container status \"78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41\": rpc error: code = NotFound desc = could not find container \"78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41\": container with ID starting with 78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41 not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.259901 4656 scope.go:117] "RemoveContainer" containerID="3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.260940 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b\": container with ID starting with 3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b not found: ID does not exist" containerID="3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.260992 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b"} err="failed to get container status \"3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b\": rpc error: code = NotFound desc = could not find container \"3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b\": container with ID starting with 3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.261030 4656 scope.go:117] "RemoveContainer" containerID="daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.261854 4656 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050\": container with ID starting with daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050 not found: ID does not exist" containerID="daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.261890 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050"} err="failed to get container status \"daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050\": rpc error: code = NotFound desc = could not find container \"daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050\": container with ID starting with daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050 not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.261933 4656 scope.go:117] "RemoveContainer" containerID="10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.284193 4656 scope.go:117] "RemoveContainer" containerID="a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.313203 4656 scope.go:117] "RemoveContainer" containerID="142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.352238 4656 scope.go:117] "RemoveContainer" containerID="10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.352585 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746\": container with ID starting with 
10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746 not found: ID does not exist" containerID="10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.352616 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746"} err="failed to get container status \"10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746\": rpc error: code = NotFound desc = could not find container \"10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746\": container with ID starting with 10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746 not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.352638 4656 scope.go:117] "RemoveContainer" containerID="a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.352837 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a\": container with ID starting with a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a not found: ID does not exist" containerID="a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.352894 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a"} err="failed to get container status \"a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a\": rpc error: code = NotFound desc = could not find container \"a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a\": container with ID starting with a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a not found: ID does not 
exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.352918 4656 scope.go:117] "RemoveContainer" containerID="142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.353141 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8\": container with ID starting with 142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8 not found: ID does not exist" containerID="142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.353190 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8"} err="failed to get container status \"142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8\": rpc error: code = NotFound desc = could not find container \"142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8\": container with ID starting with 142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8 not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.353208 4656 scope.go:117] "RemoveContainer" containerID="891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.372395 4656 scope.go:117] "RemoveContainer" containerID="3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.387470 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6b1aae7-caaa-427d-8b07-705b02e81763" (UID: "a6b1aae7-caaa-427d-8b07-705b02e81763"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.402545 4656 scope.go:117] "RemoveContainer" containerID="6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.415466 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.421926 4656 scope.go:117] "RemoveContainer" containerID="891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.423353 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855\": container with ID starting with 891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855 not found: ID does not exist" containerID="891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.423399 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855"} err="failed to get container status \"891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855\": rpc error: code = NotFound desc = could not find container \"891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855\": container with ID starting with 891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855 not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.423431 4656 scope.go:117] "RemoveContainer" containerID="3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.423943 4656 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f\": container with ID starting with 3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f not found: ID does not exist" containerID="3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.423969 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f"} err="failed to get container status \"3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f\": rpc error: code = NotFound desc = could not find container \"3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f\": container with ID starting with 3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.423987 4656 scope.go:117] "RemoveContainer" containerID="6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.424718 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a\": container with ID starting with 6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a not found: ID does not exist" containerID="6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.424752 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a"} err="failed to get container status \"6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a\": rpc error: code = NotFound desc = could 
not find container \"6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a\": container with ID starting with 6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.424773 4656 scope.go:117] "RemoveContainer" containerID="38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.441962 4656 scope.go:117] "RemoveContainer" containerID="bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.456137 4656 scope.go:117] "RemoveContainer" containerID="38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.456591 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4\": container with ID starting with 38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4 not found: ID does not exist" containerID="38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.456624 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4"} err="failed to get container status \"38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4\": rpc error: code = NotFound desc = could not find container \"38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4\": container with ID starting with 38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4 not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.456653 4656 scope.go:117] "RemoveContainer" containerID="bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469" Jan 28 
15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.457168 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469\": container with ID starting with bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469 not found: ID does not exist" containerID="bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.457201 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469"} err="failed to get container status \"bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469\": rpc error: code = NotFound desc = could not find container \"bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469\": container with ID starting with bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469 not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.620597 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8dc6j"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.625386 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8dc6j"] Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.033324 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" event={"ID":"c48ec7e7-12ff-47f6-9a82-59078f7c2b04","Type":"ContainerStarted","Data":"da32adc5e2701d55c784c0a58b5abc37943b2c44d237de786fdee1decca27137"} Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.033481 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.036530 4656 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.058210 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" podStartSLOduration=3.058187791 podStartE2EDuration="3.058187791s" podCreationTimestamp="2026-01-28 15:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:42.0517855 +0000 UTC m=+372.559956314" watchObservedRunningTime="2026-01-28 15:24:42.058187791 +0000 UTC m=+372.566358595" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120466 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4gsg5"] Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120757 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="extract-content" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120783 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="extract-content" Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120804 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="registry-server" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120812 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="registry-server" Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120822 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="registry-server" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120832 4656 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="registry-server" Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120840 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="extract-content" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120848 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="extract-content" Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120859 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="registry-server" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120866 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="registry-server" Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120876 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="extract-utilities" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120884 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="extract-utilities" Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120894 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="extract-utilities" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120902 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="extract-utilities" Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120912 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120920 4656 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator" Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120929 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="extract-utilities" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120935 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="extract-utilities" Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120944 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="extract-content" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120951 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="extract-content" Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120967 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120974 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator" Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120987 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="extract-content" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120995 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="extract-content" Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.121008 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="extract-utilities" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121016 4656 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="extract-utilities" Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.121028 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="registry-server" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121036 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="registry-server" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121202 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121239 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="registry-server" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121250 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="registry-server" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121261 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="registry-server" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121270 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="registry-server" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121280 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.122028 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.132467 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.145787 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gsg5"] Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.229779 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-catalog-content\") pod \"community-operators-4gsg5\" (UID: \"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.230330 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7xsl\" (UniqueName: \"kubernetes.io/projected/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-kube-api-access-j7xsl\") pod \"community-operators-4gsg5\" (UID: \"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.230435 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-utilities\") pod \"community-operators-4gsg5\" (UID: \"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.332245 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-catalog-content\") pod \"community-operators-4gsg5\" (UID: 
\"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.332339 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7xsl\" (UniqueName: \"kubernetes.io/projected/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-kube-api-access-j7xsl\") pod \"community-operators-4gsg5\" (UID: \"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.332378 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-utilities\") pod \"community-operators-4gsg5\" (UID: \"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.332897 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-utilities\") pod \"community-operators-4gsg5\" (UID: \"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.333299 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-catalog-content\") pod \"community-operators-4gsg5\" (UID: \"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.352622 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7xsl\" (UniqueName: \"kubernetes.io/projected/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-kube-api-access-j7xsl\") pod \"community-operators-4gsg5\" (UID: 
\"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.438467 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.870890 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gsg5"] Jan 28 15:24:42 crc kubenswrapper[4656]: W0128 15:24:42.878024 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c5ec616_76e4_4b6f_93ce_ca2dba833b37.slice/crio-25ad831370b786636fc0bdeead5f59eaff3c46dd8afa92bf25aa6907e5ac639a WatchSource:0}: Error finding container 25ad831370b786636fc0bdeead5f59eaff3c46dd8afa92bf25aa6907e5ac639a: Status 404 returned error can't find the container with id 25ad831370b786636fc0bdeead5f59eaff3c46dd8afa92bf25aa6907e5ac639a Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.048255 4656 generic.go:334] "Generic (PLEG): container finished" podID="6c5ec616-76e4-4b6f-93ce-ca2dba833b37" containerID="31d9360eee16d936840bdbdf3c235ee6534bfc3ccba931d5ee21c5dcce82066a" exitCode=0 Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.048866 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gsg5" event={"ID":"6c5ec616-76e4-4b6f-93ce-ca2dba833b37","Type":"ContainerDied","Data":"31d9360eee16d936840bdbdf3c235ee6534bfc3ccba931d5ee21c5dcce82066a"} Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.048890 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gsg5" event={"ID":"6c5ec616-76e4-4b6f-93ce-ca2dba833b37","Type":"ContainerStarted","Data":"25ad831370b786636fc0bdeead5f59eaff3c46dd8afa92bf25aa6907e5ac639a"} Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.126842 4656 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2b7pm"] Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.128269 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.132966 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.143751 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2b7pm"] Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.177932 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" path="/var/lib/kubelet/pods/7de9fc74-9948-4e73-ac93-25f9c22189ce/volumes" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.178520 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" path="/var/lib/kubelet/pods/a6b1aae7-caaa-427d-8b07-705b02e81763/volumes" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.179049 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" path="/var/lib/kubelet/pods/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1/volumes" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.260392 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xgqt\" (UniqueName: \"kubernetes.io/projected/816a03ab-31e5-4d9a-b66c-3787ac9335a9-kube-api-access-9xgqt\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.260635 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/816a03ab-31e5-4d9a-b66c-3787ac9335a9-utilities\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.260804 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/816a03ab-31e5-4d9a-b66c-3787ac9335a9-catalog-content\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.362501 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xgqt\" (UniqueName: \"kubernetes.io/projected/816a03ab-31e5-4d9a-b66c-3787ac9335a9-kube-api-access-9xgqt\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.362587 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/816a03ab-31e5-4d9a-b66c-3787ac9335a9-utilities\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.362621 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/816a03ab-31e5-4d9a-b66c-3787ac9335a9-catalog-content\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.363081 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/816a03ab-31e5-4d9a-b66c-3787ac9335a9-utilities\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm"
Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.363148 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/816a03ab-31e5-4d9a-b66c-3787ac9335a9-catalog-content\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm"
Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.384907 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xgqt\" (UniqueName: \"kubernetes.io/projected/816a03ab-31e5-4d9a-b66c-3787ac9335a9-kube-api-access-9xgqt\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm"
Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.443092 4656 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2b7pm"
Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.728986 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-pml8w"
Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.793560 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5r48x"]
Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.877625 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2b7pm"]
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.055078 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gsg5" event={"ID":"6c5ec616-76e4-4b6f-93ce-ca2dba833b37","Type":"ContainerStarted","Data":"231f7f8aa55dbc91bccac98c0157e24f18349d8c2c0ca6b62358ce884d984dc3"}
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.057313 4656 generic.go:334] "Generic (PLEG): container finished" podID="816a03ab-31e5-4d9a-b66c-3787ac9335a9" containerID="859f6d41609f7f1bc4d4a0255c122a88748ed8e069d287aca7332cbc724fe2f3" exitCode=0
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.058526 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2b7pm" event={"ID":"816a03ab-31e5-4d9a-b66c-3787ac9335a9","Type":"ContainerDied","Data":"859f6d41609f7f1bc4d4a0255c122a88748ed8e069d287aca7332cbc724fe2f3"}
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.058554 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2b7pm" event={"ID":"816a03ab-31e5-4d9a-b66c-3787ac9335a9","Type":"ContainerStarted","Data":"a89c3890ee1f2e08ca9f5dda5ec4062471d4aaa5528044e221f9a099ad0b9520"}
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.721326 4656 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-marketplace/redhat-operators-gbjhq"]
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.722823 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbjhq"
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.725549 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.740235 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbjhq"]
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.885188 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-utilities\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq"
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.885602 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-catalog-content\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq"
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.885826 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6bdv\" (UniqueName: \"kubernetes.io/projected/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-kube-api-access-b6bdv\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq"
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.993708 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"kube-api-access-b6bdv\" (UniqueName: \"kubernetes.io/projected/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-kube-api-access-b6bdv\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq"
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.994071 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-utilities\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq"
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.994237 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-catalog-content\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq"
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.994666 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-utilities\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq"
Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.994794 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-catalog-content\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq"
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.013814 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6bdv\" (UniqueName:
\"kubernetes.io/projected/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-kube-api-access-b6bdv\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq"
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.038452 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbjhq"
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.064786 4656 generic.go:334] "Generic (PLEG): container finished" podID="6c5ec616-76e4-4b6f-93ce-ca2dba833b37" containerID="231f7f8aa55dbc91bccac98c0157e24f18349d8c2c0ca6b62358ce884d984dc3" exitCode=0
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.064843 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gsg5" event={"ID":"6c5ec616-76e4-4b6f-93ce-ca2dba833b37","Type":"ContainerDied","Data":"231f7f8aa55dbc91bccac98c0157e24f18349d8c2c0ca6b62358ce884d984dc3"}
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.449863 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbjhq"]
Jan 28 15:24:45 crc kubenswrapper[4656]: E0128 15:24:45.661662 4656 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea7644c9_f50c_43f8_8165_3fa375c3b9c0.slice/crio-conmon-df16b6a6ba81e2b7746a8de266aea33bfee9014a3f26a0a2bc636c8468b4da77.scope\": RecentStats: unable to find data in memory cache]"
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.727242 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l9zl7"]
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.728886 4656 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-l9zl7"
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.734639 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9zl7"]
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.734863 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.806501 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9trsc\" (UniqueName: \"kubernetes.io/projected/769c7d2c-1d96-4056-9165-ebf9a1cefc45-kube-api-access-9trsc\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7"
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.806550 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/769c7d2c-1d96-4056-9165-ebf9a1cefc45-utilities\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7"
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.806572 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/769c7d2c-1d96-4056-9165-ebf9a1cefc45-catalog-content\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7"
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.908316 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9trsc\" (UniqueName: \"kubernetes.io/projected/769c7d2c-1d96-4056-9165-ebf9a1cefc45-kube-api-access-9trsc\") pod \"certified-operators-l9zl7\"
(UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7"
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.908368 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/769c7d2c-1d96-4056-9165-ebf9a1cefc45-utilities\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7"
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.908388 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/769c7d2c-1d96-4056-9165-ebf9a1cefc45-catalog-content\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7"
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.908847 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/769c7d2c-1d96-4056-9165-ebf9a1cefc45-catalog-content\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7"
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.909723 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/769c7d2c-1d96-4056-9165-ebf9a1cefc45-utilities\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7"
Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.927135 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9trsc\" (UniqueName: \"kubernetes.io/projected/769c7d2c-1d96-4056-9165-ebf9a1cefc45-kube-api-access-9trsc\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") "
pod="openshift-marketplace/certified-operators-l9zl7"
Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.048037 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9zl7"
Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.071649 4656 generic.go:334] "Generic (PLEG): container finished" podID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerID="df16b6a6ba81e2b7746a8de266aea33bfee9014a3f26a0a2bc636c8468b4da77" exitCode=0
Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.072000 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbjhq" event={"ID":"ea7644c9-f50c-43f8-8165-3fa375c3b9c0","Type":"ContainerDied","Data":"df16b6a6ba81e2b7746a8de266aea33bfee9014a3f26a0a2bc636c8468b4da77"}
Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.072054 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbjhq" event={"ID":"ea7644c9-f50c-43f8-8165-3fa375c3b9c0","Type":"ContainerStarted","Data":"2dd598bb76a09812057b8dc1896e04e054099bec7b15c382202025f6bc1dcb53"}
Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.080660 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gsg5" event={"ID":"6c5ec616-76e4-4b6f-93ce-ca2dba833b37","Type":"ContainerStarted","Data":"4839ca45df2dfba34812412f8dc1889ac975c0e2b1dc2af364d6d46adfb09dda"}
Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.083830 4656 generic.go:334] "Generic (PLEG): container finished" podID="816a03ab-31e5-4d9a-b66c-3787ac9335a9" containerID="b34bc4f99fbdbfc21e766ac0ed0908bd906792e88b39e74042bf0ebc3a8d2185" exitCode=0
Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.083871 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2b7pm"
event={"ID":"816a03ab-31e5-4d9a-b66c-3787ac9335a9","Type":"ContainerDied","Data":"b34bc4f99fbdbfc21e766ac0ed0908bd906792e88b39e74042bf0ebc3a8d2185"}
Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.120310 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4gsg5" podStartSLOduration=1.634000815 podStartE2EDuration="4.120284698s" podCreationTimestamp="2026-01-28 15:24:42 +0000 UTC" firstStartedPulling="2026-01-28 15:24:43.050798521 +0000 UTC m=+373.558969325" lastFinishedPulling="2026-01-28 15:24:45.537082404 +0000 UTC m=+376.045253208" observedRunningTime="2026-01-28 15:24:46.117502209 +0000 UTC m=+376.625673023" watchObservedRunningTime="2026-01-28 15:24:46.120284698 +0000 UTC m=+376.628455502"
Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.598255 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9zl7"]
Jan 28 15:24:47 crc kubenswrapper[4656]: I0128 15:24:47.094297 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2b7pm" event={"ID":"816a03ab-31e5-4d9a-b66c-3787ac9335a9","Type":"ContainerStarted","Data":"983b27a00634327ed787b3afb9d077c285d644ffb6772c6c1133029a5b1be0ea"}
Jan 28 15:24:47 crc kubenswrapper[4656]: I0128 15:24:47.097341 4656 generic.go:334] "Generic (PLEG): container finished" podID="769c7d2c-1d96-4056-9165-ebf9a1cefc45" containerID="4e8fbb9400031706ec2a7786536678789e04a336c943221496437db3199921c5" exitCode=0
Jan 28 15:24:47 crc kubenswrapper[4656]: I0128 15:24:47.098589 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9zl7" event={"ID":"769c7d2c-1d96-4056-9165-ebf9a1cefc45","Type":"ContainerDied","Data":"4e8fbb9400031706ec2a7786536678789e04a336c943221496437db3199921c5"}
Jan 28 15:24:47 crc kubenswrapper[4656]: I0128 15:24:47.098625 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-marketplace/certified-operators-l9zl7" event={"ID":"769c7d2c-1d96-4056-9165-ebf9a1cefc45","Type":"ContainerStarted","Data":"9aeffcfa6316b99b0439b329440f64e8e84e2f6c230abd816febe3be888893f1"}
Jan 28 15:24:47 crc kubenswrapper[4656]: I0128 15:24:47.126006 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2b7pm" podStartSLOduration=1.627282456 podStartE2EDuration="4.125974749s" podCreationTimestamp="2026-01-28 15:24:43 +0000 UTC" firstStartedPulling="2026-01-28 15:24:44.059055925 +0000 UTC m=+374.567226729" lastFinishedPulling="2026-01-28 15:24:46.557748218 +0000 UTC m=+377.065919022" observedRunningTime="2026-01-28 15:24:47.121555224 +0000 UTC m=+377.629726038" watchObservedRunningTime="2026-01-28 15:24:47.125974749 +0000 UTC m=+377.634145543"
Jan 28 15:24:48 crc kubenswrapper[4656]: I0128 15:24:48.109827 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9zl7" event={"ID":"769c7d2c-1d96-4056-9165-ebf9a1cefc45","Type":"ContainerStarted","Data":"0447c1deb99eafea9e18abfe4ecf25bab580127d202a734a78186a7343a1aa00"}
Jan 28 15:24:48 crc kubenswrapper[4656]: I0128 15:24:48.113911 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbjhq" event={"ID":"ea7644c9-f50c-43f8-8165-3fa375c3b9c0","Type":"ContainerStarted","Data":"98395829abb715f8be4df3366fdfa04876abca34d58e9c479a1d7e699b8ca848"}
Jan 28 15:24:49 crc kubenswrapper[4656]: I0128 15:24:49.121402 4656 generic.go:334] "Generic (PLEG): container finished" podID="769c7d2c-1d96-4056-9165-ebf9a1cefc45" containerID="0447c1deb99eafea9e18abfe4ecf25bab580127d202a734a78186a7343a1aa00" exitCode=0
Jan 28 15:24:49 crc kubenswrapper[4656]: I0128 15:24:49.121679 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9zl7"
event={"ID":"769c7d2c-1d96-4056-9165-ebf9a1cefc45","Type":"ContainerDied","Data":"0447c1deb99eafea9e18abfe4ecf25bab580127d202a734a78186a7343a1aa00"}
Jan 28 15:24:49 crc kubenswrapper[4656]: I0128 15:24:49.127923 4656 generic.go:334] "Generic (PLEG): container finished" podID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerID="98395829abb715f8be4df3366fdfa04876abca34d58e9c479a1d7e699b8ca848" exitCode=0
Jan 28 15:24:49 crc kubenswrapper[4656]: I0128 15:24:49.127965 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbjhq" event={"ID":"ea7644c9-f50c-43f8-8165-3fa375c3b9c0","Type":"ContainerDied","Data":"98395829abb715f8be4df3366fdfa04876abca34d58e9c479a1d7e699b8ca848"}
Jan 28 15:24:49 crc kubenswrapper[4656]: I0128 15:24:49.127994 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbjhq" event={"ID":"ea7644c9-f50c-43f8-8165-3fa375c3b9c0","Type":"ContainerStarted","Data":"7357e3dce7c91d9015f9378573757a7ac9f266086a426319f16d40934b333d4a"}
Jan 28 15:24:49 crc kubenswrapper[4656]: I0128 15:24:49.162538 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gbjhq" podStartSLOduration=2.4401473989999998 podStartE2EDuration="5.162416841s" podCreationTimestamp="2026-01-28 15:24:44 +0000 UTC" firstStartedPulling="2026-01-28 15:24:46.076324493 +0000 UTC m=+376.584495297" lastFinishedPulling="2026-01-28 15:24:48.798593925 +0000 UTC m=+379.306764739" observedRunningTime="2026-01-28 15:24:49.159848489 +0000 UTC m=+379.668019313" watchObservedRunningTime="2026-01-28 15:24:49.162416841 +0000 UTC m=+379.670587645"
Jan 28 15:24:50 crc kubenswrapper[4656]: I0128 15:24:50.138218 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9zl7"
event={"ID":"769c7d2c-1d96-4056-9165-ebf9a1cefc45","Type":"ContainerStarted","Data":"1c335d1f061f34b0509ecb7815ca45aaea1789e9f989b13d691a9e29d502d25d"}
Jan 28 15:24:50 crc kubenswrapper[4656]: I0128 15:24:50.160984 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l9zl7" podStartSLOduration=2.619369293 podStartE2EDuration="5.16096349s" podCreationTimestamp="2026-01-28 15:24:45 +0000 UTC" firstStartedPulling="2026-01-28 15:24:47.100621132 +0000 UTC m=+377.608791936" lastFinishedPulling="2026-01-28 15:24:49.642215299 +0000 UTC m=+380.150386133" observedRunningTime="2026-01-28 15:24:50.158078158 +0000 UTC m=+380.666248962" watchObservedRunningTime="2026-01-28 15:24:50.16096349 +0000 UTC m=+380.669134294"
Jan 28 15:24:52 crc kubenswrapper[4656]: I0128 15:24:52.439805 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4gsg5"
Jan 28 15:24:52 crc kubenswrapper[4656]: I0128 15:24:52.440330 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4gsg5"
Jan 28 15:24:52 crc kubenswrapper[4656]: I0128 15:24:52.480374 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4gsg5"
Jan 28 15:24:53 crc kubenswrapper[4656]: I0128 15:24:53.193882 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4gsg5"
Jan 28 15:24:53 crc kubenswrapper[4656]: I0128 15:24:53.443510 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2b7pm"
Jan 28 15:24:53 crc kubenswrapper[4656]: I0128 15:24:53.444720 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2b7pm"
Jan 28 15:24:53 crc kubenswrapper[4656]: I0128 15:24:53.480077 4656
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2b7pm"
Jan 28 15:24:54 crc kubenswrapper[4656]: I0128 15:24:54.198670 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2b7pm"
Jan 28 15:24:55 crc kubenswrapper[4656]: I0128 15:24:55.039353 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gbjhq"
Jan 28 15:24:55 crc kubenswrapper[4656]: I0128 15:24:55.039417 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gbjhq"
Jan 28 15:24:55 crc kubenswrapper[4656]: I0128 15:24:55.082259 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gbjhq"
Jan 28 15:24:55 crc kubenswrapper[4656]: I0128 15:24:55.199206 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gbjhq"
Jan 28 15:24:56 crc kubenswrapper[4656]: I0128 15:24:56.048811 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l9zl7"
Jan 28 15:24:56 crc kubenswrapper[4656]: I0128 15:24:56.049176 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l9zl7"
Jan 28 15:24:56 crc kubenswrapper[4656]: I0128 15:24:56.098360 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l9zl7"
Jan 28 15:24:56 crc kubenswrapper[4656]: I0128 15:24:56.236068 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l9zl7"
Jan 28 15:25:08 crc kubenswrapper[4656]: I0128 15:25:08.855091 4656 kuberuntime_container.go:808] "Killing container with a grace period"
pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" podUID="5823f5c7-fabe-4d4b-a3df-49349749b19e" containerName="registry" containerID="cri-o://e21bd5b2594163e4a314e6e9e388228b463a2c6dc9baa4a831b207ddedbec967" gracePeriod=30
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.270865 4656 generic.go:334] "Generic (PLEG): container finished" podID="5823f5c7-fabe-4d4b-a3df-49349749b19e" containerID="e21bd5b2594163e4a314e6e9e388228b463a2c6dc9baa4a831b207ddedbec967" exitCode=0
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.270945 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" event={"ID":"5823f5c7-fabe-4d4b-a3df-49349749b19e","Type":"ContainerDied","Data":"e21bd5b2594163e4a314e6e9e388228b463a2c6dc9baa4a831b207ddedbec967"}
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.335892 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416246 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-tls\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") "
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416293 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-bound-sa-token\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") "
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416330 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt6gm\" (UniqueName:
\"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-kube-api-access-bt6gm\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") "
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416384 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-trusted-ca\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") "
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416552 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") "
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416598 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5823f5c7-fabe-4d4b-a3df-49349749b19e-installation-pull-secrets\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") "
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416870 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-certificates\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") "
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416895 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5823f5c7-fabe-4d4b-a3df-49349749b19e-ca-trust-extracted\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID:
\"5823f5c7-fabe-4d4b-a3df-49349749b19e\") "
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.418458 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.425763 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-kube-api-access-bt6gm" (OuterVolumeSpecName: "kube-api-access-bt6gm") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "kube-api-access-bt6gm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.427812 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.429671 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5823f5c7-fabe-4d4b-a3df-49349749b19e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "installation-pull-secrets".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.430044 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.434574 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5823f5c7-fabe-4d4b-a3df-49349749b19e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.435578 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.445044 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8".
PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.518738 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt6gm\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-kube-api-access-bt6gm\") on node \"crc\" DevicePath \"\""
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.518776 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.518789 4656 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5823f5c7-fabe-4d4b-a3df-49349749b19e-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.518800 4656 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.518812 4656 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5823f5c7-fabe-4d4b-a3df-49349749b19e-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.518821 4656 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.518834 4656 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 28 15:25:10 crc
kubenswrapper[4656]: I0128 15:25:10.281626 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" event={"ID":"5823f5c7-fabe-4d4b-a3df-49349749b19e","Type":"ContainerDied","Data":"54b236edb7b566b5bae8f9b4e93d4a4d144dfcf0ffa1b6e2bf3e66d3161ef327"} Jan 28 15:25:10 crc kubenswrapper[4656]: I0128 15:25:10.281833 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:25:10 crc kubenswrapper[4656]: I0128 15:25:10.282044 4656 scope.go:117] "RemoveContainer" containerID="e21bd5b2594163e4a314e6e9e388228b463a2c6dc9baa4a831b207ddedbec967" Jan 28 15:25:10 crc kubenswrapper[4656]: I0128 15:25:10.343532 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5r48x"] Jan 28 15:25:10 crc kubenswrapper[4656]: I0128 15:25:10.351850 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5r48x"] Jan 28 15:25:11 crc kubenswrapper[4656]: I0128 15:25:11.182390 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5823f5c7-fabe-4d4b-a3df-49349749b19e" path="/var/lib/kubelet/pods/5823f5c7-fabe-4d4b-a3df-49349749b19e/volumes" Jan 28 15:25:11 crc kubenswrapper[4656]: I0128 15:25:11.264399 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:25:11 crc kubenswrapper[4656]: I0128 15:25:11.264494 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:25:41 crc kubenswrapper[4656]: I0128 15:25:41.264085 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:25:41 crc kubenswrapper[4656]: I0128 15:25:41.264612 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.264513 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.266012 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.266211 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.266938 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"d18f94cea4f3c54ba99c855b801d8b744d7657dab8312dfc4b6351d91d1b429d"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.267082 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://d18f94cea4f3c54ba99c855b801d8b744d7657dab8312dfc4b6351d91d1b429d" gracePeriod=600 Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.708016 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="d18f94cea4f3c54ba99c855b801d8b744d7657dab8312dfc4b6351d91d1b429d" exitCode=0 Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.708092 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"d18f94cea4f3c54ba99c855b801d8b744d7657dab8312dfc4b6351d91d1b429d"} Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.708434 4656 scope.go:117] "RemoveContainer" containerID="a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1" Jan 28 15:26:12 crc kubenswrapper[4656]: I0128 15:26:12.717646 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"c69beb7ab8edbd918c480179277219ae11258f52e8862dd697c2421ee64e9af1"} Jan 28 15:28:41 crc kubenswrapper[4656]: I0128 15:28:41.264864 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:28:41 crc kubenswrapper[4656]: I0128 15:28:41.265606 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:29:11 crc kubenswrapper[4656]: I0128 15:29:11.264545 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:29:11 crc kubenswrapper[4656]: I0128 15:29:11.265092 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:29:41 crc kubenswrapper[4656]: I0128 15:29:41.265352 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:29:41 crc kubenswrapper[4656]: I0128 15:29:41.266093 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 28 15:29:41 crc kubenswrapper[4656]: I0128 15:29:41.266197 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:29:41 crc kubenswrapper[4656]: I0128 15:29:41.266839 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c69beb7ab8edbd918c480179277219ae11258f52e8862dd697c2421ee64e9af1"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:29:41 crc kubenswrapper[4656]: I0128 15:29:41.266896 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://c69beb7ab8edbd918c480179277219ae11258f52e8862dd697c2421ee64e9af1" gracePeriod=600 Jan 28 15:29:42 crc kubenswrapper[4656]: I0128 15:29:42.092378 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="c69beb7ab8edbd918c480179277219ae11258f52e8862dd697c2421ee64e9af1" exitCode=0 Jan 28 15:29:42 crc kubenswrapper[4656]: I0128 15:29:42.092437 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"c69beb7ab8edbd918c480179277219ae11258f52e8862dd697c2421ee64e9af1"} Jan 28 15:29:42 crc kubenswrapper[4656]: I0128 15:29:42.092727 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"87c17d0db94ead712d442056e9a18e38055b40f27c59008c11f1ea77ac6037d0"} Jan 28 15:29:42 
crc kubenswrapper[4656]: I0128 15:29:42.092786 4656 scope.go:117] "RemoveContainer" containerID="d18f94cea4f3c54ba99c855b801d8b744d7657dab8312dfc4b6351d91d1b429d" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.250052 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n"] Jan 28 15:30:00 crc kubenswrapper[4656]: E0128 15:30:00.251610 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5823f5c7-fabe-4d4b-a3df-49349749b19e" containerName="registry" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.251640 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5823f5c7-fabe-4d4b-a3df-49349749b19e" containerName="registry" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.251844 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5823f5c7-fabe-4d4b-a3df-49349749b19e" containerName="registry" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.252469 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.255201 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.255389 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.268644 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n"] Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.310750 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/647c2e48-5d47-46f5-bd41-1512da5aef27-config-volume\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.310837 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/647c2e48-5d47-46f5-bd41-1512da5aef27-secret-volume\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.311089 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfc9w\" (UniqueName: \"kubernetes.io/projected/647c2e48-5d47-46f5-bd41-1512da5aef27-kube-api-access-rfc9w\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.412668 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/647c2e48-5d47-46f5-bd41-1512da5aef27-secret-volume\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.412991 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfc9w\" (UniqueName: \"kubernetes.io/projected/647c2e48-5d47-46f5-bd41-1512da5aef27-kube-api-access-rfc9w\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.413084 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/647c2e48-5d47-46f5-bd41-1512da5aef27-config-volume\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.414050 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/647c2e48-5d47-46f5-bd41-1512da5aef27-config-volume\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.419973 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/647c2e48-5d47-46f5-bd41-1512da5aef27-secret-volume\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.430469 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfc9w\" (UniqueName: \"kubernetes.io/projected/647c2e48-5d47-46f5-bd41-1512da5aef27-kube-api-access-rfc9w\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.568450 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.890219 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n"] Jan 28 15:30:01 crc kubenswrapper[4656]: I0128 15:30:01.216731 4656 generic.go:334] "Generic (PLEG): container finished" podID="647c2e48-5d47-46f5-bd41-1512da5aef27" containerID="e478d6329d394a8ee6946b81a0192102dc331c4b0fad2b32a9549e2af991b8fd" exitCode=0 Jan 28 15:30:01 crc kubenswrapper[4656]: I0128 15:30:01.216787 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" event={"ID":"647c2e48-5d47-46f5-bd41-1512da5aef27","Type":"ContainerDied","Data":"e478d6329d394a8ee6946b81a0192102dc331c4b0fad2b32a9549e2af991b8fd"} Jan 28 15:30:01 crc kubenswrapper[4656]: I0128 15:30:01.216857 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" 
event={"ID":"647c2e48-5d47-46f5-bd41-1512da5aef27","Type":"ContainerStarted","Data":"6e1794ee00e769b0083dd072b9a68daa9637d06cf8f13a30e84b9a62689bf43f"} Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.424466 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.565500 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfc9w\" (UniqueName: \"kubernetes.io/projected/647c2e48-5d47-46f5-bd41-1512da5aef27-kube-api-access-rfc9w\") pod \"647c2e48-5d47-46f5-bd41-1512da5aef27\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.565860 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/647c2e48-5d47-46f5-bd41-1512da5aef27-config-volume\") pod \"647c2e48-5d47-46f5-bd41-1512da5aef27\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.565953 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/647c2e48-5d47-46f5-bd41-1512da5aef27-secret-volume\") pod \"647c2e48-5d47-46f5-bd41-1512da5aef27\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.566744 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/647c2e48-5d47-46f5-bd41-1512da5aef27-config-volume" (OuterVolumeSpecName: "config-volume") pod "647c2e48-5d47-46f5-bd41-1512da5aef27" (UID: "647c2e48-5d47-46f5-bd41-1512da5aef27"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.570806 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647c2e48-5d47-46f5-bd41-1512da5aef27-kube-api-access-rfc9w" (OuterVolumeSpecName: "kube-api-access-rfc9w") pod "647c2e48-5d47-46f5-bd41-1512da5aef27" (UID: "647c2e48-5d47-46f5-bd41-1512da5aef27"). InnerVolumeSpecName "kube-api-access-rfc9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.571553 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/647c2e48-5d47-46f5-bd41-1512da5aef27-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "647c2e48-5d47-46f5-bd41-1512da5aef27" (UID: "647c2e48-5d47-46f5-bd41-1512da5aef27"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.668101 4656 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/647c2e48-5d47-46f5-bd41-1512da5aef27-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.668153 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfc9w\" (UniqueName: \"kubernetes.io/projected/647c2e48-5d47-46f5-bd41-1512da5aef27-kube-api-access-rfc9w\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.668202 4656 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/647c2e48-5d47-46f5-bd41-1512da5aef27-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:03 crc kubenswrapper[4656]: I0128 15:30:03.233550 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" 
event={"ID":"647c2e48-5d47-46f5-bd41-1512da5aef27","Type":"ContainerDied","Data":"6e1794ee00e769b0083dd072b9a68daa9637d06cf8f13a30e84b9a62689bf43f"} Jan 28 15:30:03 crc kubenswrapper[4656]: I0128 15:30:03.233610 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e1794ee00e769b0083dd072b9a68daa9637d06cf8f13a30e84b9a62689bf43f" Jan 28 15:30:03 crc kubenswrapper[4656]: I0128 15:30:03.233619 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.234991 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j"] Jan 28 15:30:26 crc kubenswrapper[4656]: E0128 15:30:26.235782 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647c2e48-5d47-46f5-bd41-1512da5aef27" containerName="collect-profiles" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.235796 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="647c2e48-5d47-46f5-bd41-1512da5aef27" containerName="collect-profiles" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.235934 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="647c2e48-5d47-46f5-bd41-1512da5aef27" containerName="collect-profiles" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.236664 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.240063 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.240277 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.246920 4656 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hk547" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.256060 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-cpqlr"] Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.256971 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-cpqlr" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.260667 4656 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-t9z5x" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.271176 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jl8hn"] Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.272097 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.274063 4656 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wldpb" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.275231 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j"] Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.279304 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-cpqlr"] Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.288237 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jl8hn"] Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.433584 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh48s\" (UniqueName: \"kubernetes.io/projected/2afeb8d3-6acc-42ee-aa3d-943c0784354c-kube-api-access-rh48s\") pod \"cert-manager-webhook-687f57d79b-jl8hn\" (UID: \"2afeb8d3-6acc-42ee-aa3d-943c0784354c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.433952 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvfnz\" (UniqueName: \"kubernetes.io/projected/b66b5cb8-91c8-4122-b61a-d2f5f7815d26-kube-api-access-lvfnz\") pod \"cert-manager-858654f9db-cpqlr\" (UID: \"b66b5cb8-91c8-4122-b61a-d2f5f7815d26\") " pod="cert-manager/cert-manager-858654f9db-cpqlr" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.434050 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4ccb\" (UniqueName: \"kubernetes.io/projected/46693ecf-5a40-4182-aeed-7161923e4016-kube-api-access-m4ccb\") pod \"cert-manager-cainjector-cf98fcc89-8bd5j\" 
(UID: \"46693ecf-5a40-4182-aeed-7161923e4016\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.536927 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh48s\" (UniqueName: \"kubernetes.io/projected/2afeb8d3-6acc-42ee-aa3d-943c0784354c-kube-api-access-rh48s\") pod \"cert-manager-webhook-687f57d79b-jl8hn\" (UID: \"2afeb8d3-6acc-42ee-aa3d-943c0784354c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.537032 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvfnz\" (UniqueName: \"kubernetes.io/projected/b66b5cb8-91c8-4122-b61a-d2f5f7815d26-kube-api-access-lvfnz\") pod \"cert-manager-858654f9db-cpqlr\" (UID: \"b66b5cb8-91c8-4122-b61a-d2f5f7815d26\") " pod="cert-manager/cert-manager-858654f9db-cpqlr" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.537067 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4ccb\" (UniqueName: \"kubernetes.io/projected/46693ecf-5a40-4182-aeed-7161923e4016-kube-api-access-m4ccb\") pod \"cert-manager-cainjector-cf98fcc89-8bd5j\" (UID: \"46693ecf-5a40-4182-aeed-7161923e4016\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.569768 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4ccb\" (UniqueName: \"kubernetes.io/projected/46693ecf-5a40-4182-aeed-7161923e4016-kube-api-access-m4ccb\") pod \"cert-manager-cainjector-cf98fcc89-8bd5j\" (UID: \"46693ecf-5a40-4182-aeed-7161923e4016\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.571057 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh48s\" (UniqueName: 
\"kubernetes.io/projected/2afeb8d3-6acc-42ee-aa3d-943c0784354c-kube-api-access-rh48s\") pod \"cert-manager-webhook-687f57d79b-jl8hn\" (UID: \"2afeb8d3-6acc-42ee-aa3d-943c0784354c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.581690 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvfnz\" (UniqueName: \"kubernetes.io/projected/b66b5cb8-91c8-4122-b61a-d2f5f7815d26-kube-api-access-lvfnz\") pod \"cert-manager-858654f9db-cpqlr\" (UID: \"b66b5cb8-91c8-4122-b61a-d2f5f7815d26\") " pod="cert-manager/cert-manager-858654f9db-cpqlr" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.585841 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.793204 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jl8hn"] Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.806511 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.852623 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.876341 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-cpqlr" Jan 28 15:30:27 crc kubenswrapper[4656]: I0128 15:30:27.102187 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-cpqlr"] Jan 28 15:30:27 crc kubenswrapper[4656]: W0128 15:30:27.105824 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb66b5cb8_91c8_4122_b61a_d2f5f7815d26.slice/crio-dd857eabfe083949997ea5d9d04ffbc4bc706cbcdbf93c82ee5ffe2cc47518f5 WatchSource:0}: Error finding container dd857eabfe083949997ea5d9d04ffbc4bc706cbcdbf93c82ee5ffe2cc47518f5: Status 404 returned error can't find the container with id dd857eabfe083949997ea5d9d04ffbc4bc706cbcdbf93c82ee5ffe2cc47518f5 Jan 28 15:30:27 crc kubenswrapper[4656]: I0128 15:30:27.244082 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j"] Jan 28 15:30:27 crc kubenswrapper[4656]: W0128 15:30:27.247938 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46693ecf_5a40_4182_aeed_7161923e4016.slice/crio-08a4b62589684b3764525eef92b05232d6f86ab755dbbe409e015a45599208fa WatchSource:0}: Error finding container 08a4b62589684b3764525eef92b05232d6f86ab755dbbe409e015a45599208fa: Status 404 returned error can't find the container with id 08a4b62589684b3764525eef92b05232d6f86ab755dbbe409e015a45599208fa Jan 28 15:30:27 crc kubenswrapper[4656]: I0128 15:30:27.365623 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" event={"ID":"2afeb8d3-6acc-42ee-aa3d-943c0784354c","Type":"ContainerStarted","Data":"4d3e29bd150054b57a1bd17a2ca53bb608c4260029a7226165461fd586af80a7"} Jan 28 15:30:27 crc kubenswrapper[4656]: I0128 15:30:27.366985 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" event={"ID":"46693ecf-5a40-4182-aeed-7161923e4016","Type":"ContainerStarted","Data":"08a4b62589684b3764525eef92b05232d6f86ab755dbbe409e015a45599208fa"}
Jan 28 15:30:27 crc kubenswrapper[4656]: I0128 15:30:27.374191 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-cpqlr" event={"ID":"b66b5cb8-91c8-4122-b61a-d2f5f7815d26","Type":"ContainerStarted","Data":"dd857eabfe083949997ea5d9d04ffbc4bc706cbcdbf93c82ee5ffe2cc47518f5"}
Jan 28 15:30:31 crc kubenswrapper[4656]: I0128 15:30:31.406381 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" event={"ID":"2afeb8d3-6acc-42ee-aa3d-943c0784354c","Type":"ContainerStarted","Data":"957633209ab27696b4ffd0da4ec1a922b020e7182324be3d83b10cd883f55544"}
Jan 28 15:30:31 crc kubenswrapper[4656]: I0128 15:30:31.406846 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn"
Jan 28 15:30:31 crc kubenswrapper[4656]: I0128 15:30:31.425974 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" podStartSLOduration=1.796957247 podStartE2EDuration="5.425941018s" podCreationTimestamp="2026-01-28 15:30:26 +0000 UTC" firstStartedPulling="2026-01-28 15:30:26.805997714 +0000 UTC m=+717.314168518" lastFinishedPulling="2026-01-28 15:30:30.434981485 +0000 UTC m=+720.943152289" observedRunningTime="2026-01-28 15:30:31.422358649 +0000 UTC m=+721.930529453" watchObservedRunningTime="2026-01-28 15:30:31.425941018 +0000 UTC m=+721.934111812"
Jan 28 15:30:34 crc kubenswrapper[4656]: I0128 15:30:34.424812 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-cpqlr" event={"ID":"b66b5cb8-91c8-4122-b61a-d2f5f7815d26","Type":"ContainerStarted","Data":"627cec4c46574dd778de34dedc17b66a7ca1b216d70d028aa2742bf86c25a020"}
Jan 28 15:30:34 crc kubenswrapper[4656]: I0128 15:30:34.426050 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" event={"ID":"46693ecf-5a40-4182-aeed-7161923e4016","Type":"ContainerStarted","Data":"9fadf63e9cb3e630f28dcf28c87eb7218dbc2a851129949abcc79aba8394f5d9"}
Jan 28 15:30:34 crc kubenswrapper[4656]: I0128 15:30:34.439138 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-cpqlr" podStartSLOduration=2.025193785 podStartE2EDuration="8.439117857s" podCreationTimestamp="2026-01-28 15:30:26 +0000 UTC" firstStartedPulling="2026-01-28 15:30:27.108596494 +0000 UTC m=+717.616767298" lastFinishedPulling="2026-01-28 15:30:33.522520576 +0000 UTC m=+724.030691370" observedRunningTime="2026-01-28 15:30:34.43778334 +0000 UTC m=+724.945954164" watchObservedRunningTime="2026-01-28 15:30:34.439117857 +0000 UTC m=+724.947288661"
Jan 28 15:30:34 crc kubenswrapper[4656]: I0128 15:30:34.461491 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" podStartSLOduration=2.183081111 podStartE2EDuration="8.461469864s" podCreationTimestamp="2026-01-28 15:30:26 +0000 UTC" firstStartedPulling="2026-01-28 15:30:27.250289463 +0000 UTC m=+717.758460267" lastFinishedPulling="2026-01-28 15:30:33.528678216 +0000 UTC m=+724.036849020" observedRunningTime="2026-01-28 15:30:34.455977092 +0000 UTC m=+724.964147906" watchObservedRunningTime="2026-01-28 15:30:34.461469864 +0000 UTC m=+724.969640668"
Jan 28 15:30:36 crc kubenswrapper[4656]: I0128 15:30:36.589659 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn"
Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.944305 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kwnzt"]
Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.945291 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-controller" containerID="cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" gracePeriod=30
Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.945332 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="sbdb" containerID="cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" gracePeriod=30
Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.945436 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-node" containerID="cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" gracePeriod=30
Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.945457 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-acl-logging" containerID="cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" gracePeriod=30
Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.945499 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" gracePeriod=30
Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.945551 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="northd" containerID="cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" gracePeriod=30
Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.945331 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="nbdb" containerID="cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" gracePeriod=30
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.014791 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" containerID="cri-o://318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" gracePeriod=30
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.401630 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/3.log"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.403866 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovn-acl-logging/0.log"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.404382 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovn-controller/0.log"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.404834 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477358 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-th9gj"]
Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477717 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-node"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477742 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-node"
Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477770 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477778 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477791 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-ovn-metrics"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477799 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-ovn-metrics"
Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477809 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477818 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477831 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="nbdb"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477838 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="nbdb"
Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477849 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477856 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477887 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477895 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477903 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="sbdb"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477911 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="sbdb"
Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477921 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477930 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477943 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="northd"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477950 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="northd"
Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477961 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-acl-logging"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477968 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-acl-logging"
Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477978 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kubecfg-setup"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477985 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kubecfg-setup"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478141 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="sbdb"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478199 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478214 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-ovn-metrics"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478223 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="northd"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478233 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-acl-logging"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478244 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478254 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478262 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478272 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478280 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="nbdb"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478288 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-node"
Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.478399 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478409 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478519 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.480409 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553593 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-ovn-kubernetes\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553702 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68qp2\" (UniqueName: \"kubernetes.io/projected/5748c84b-daec-4bf0-bda9-180d379ab075-kube-api-access-68qp2\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553749 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-ovn\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553816 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-var-lib-openvswitch\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553851 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-env-overrides\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553840 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553884 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-slash\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553937 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-slash" (OuterVolumeSpecName: "host-slash") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554038 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-systemd-units\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554088 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-log-socket\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554123 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-bin\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554153 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-etc-openvswitch\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554224 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-script-lib\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554265 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-systemd\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554304 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-config\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554326 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-openvswitch\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554369 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748c84b-daec-4bf0-bda9-180d379ab075-ovn-node-metrics-cert\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554394 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-node-log\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554427 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-netns\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554452 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-kubelet\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554483 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554512 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-netd\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") "
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554843 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554883 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554926 4656 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554943 4656 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-slash\") on node \"crc\" DevicePath \"\""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554981 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554991 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555055 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555054 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-node-log" (OuterVolumeSpecName: "node-log") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555080 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555095 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-log-socket" (OuterVolumeSpecName: "log-socket") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555098 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555140 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555146 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555196 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555202 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555388 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555424 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.559811 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5748c84b-daec-4bf0-bda9-180d379ab075-kube-api-access-68qp2" (OuterVolumeSpecName: "kube-api-access-68qp2") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "kube-api-access-68qp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.560660 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5748c84b-daec-4bf0-bda9-180d379ab075-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.566120 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/2.log"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.566714 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/1.log"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.566799 4656 generic.go:334] "Generic (PLEG): container finished" podID="7662a84d-d9cb-4684-b76f-c63ffeff8344" containerID="34fa797442b557de0e9ffab2d826f22ba8d92221e464edd57e5778604260c2bd" exitCode=2
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.566870 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerDied","Data":"34fa797442b557de0e9ffab2d826f22ba8d92221e464edd57e5778604260c2bd"}
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.567357 4656 scope.go:117] "RemoveContainer" containerID="c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.568111 4656 scope.go:117] "RemoveContainer" containerID="34fa797442b557de0e9ffab2d826f22ba8d92221e464edd57e5778604260c2bd"
Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.568431 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rpzjg_openshift-multus(7662a84d-d9cb-4684-b76f-c63ffeff8344)\"" pod="openshift-multus/multus-rpzjg" podUID="7662a84d-d9cb-4684-b76f-c63ffeff8344"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.570244 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/3.log"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.573945 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovn-acl-logging/0.log"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.574568 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovn-controller/0.log"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.574900 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" exitCode=0
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575001 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" exitCode=0
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575075 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" exitCode=0
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575145 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" exitCode=0
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575229 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" exitCode=0
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575297 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" exitCode=0
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575370 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" exitCode=143
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575466 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" exitCode=143
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575526 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt"
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575563 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"}
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576347 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"}
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576413 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"}
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576478 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"}
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576542 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"}
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576596 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"}
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576671 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"}
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576741 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"}
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576799 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"}
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576845 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"}
Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576893 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container"
containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576945 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576992 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577037 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577177 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577234 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577290 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577345 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577395 4656 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577439 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577487 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577544 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577591 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577638 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577683 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577730 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577805 4656 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577894 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577979 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578055 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578122 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578220 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578281 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578354 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} Jan 28 
15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578407 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578472 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578550 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578608 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578686 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"99f3f58926f9f5145244dbe6e9acfd081f57a6d5e67d0fa71fb1124101e0bee2"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578763 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578837 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578918 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578992 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.579063 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.579130 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.579206 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.579263 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.579319 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.579363 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578474 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.617906 4656 scope.go:117] "RemoveContainer" containerID="318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.633004 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.648234 4656 scope.go:117] "RemoveContainer" containerID="be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.656820 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-ovn\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.656884 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.656908 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-run-ovn-kubernetes\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.656928 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovnkube-script-lib\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.656950 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-var-lib-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.656967 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-env-overrides\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.657412 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-kubelet\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.657463 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-log-socket\") pod \"ovnkube-node-th9gj\" (UID: 
\"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.657481 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-cni-bin\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.657497 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-run-netns\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.657807 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-slash\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.657923 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x64t\" (UniqueName: \"kubernetes.io/projected/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-kube-api-access-8x64t\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.657988 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-systemd\") pod 
\"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658033 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovnkube-config\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658057 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-node-log\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658111 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovn-node-metrics-cert\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658141 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-systemd-units\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658190 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658232 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-etc-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658252 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-cni-netd\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658336 4656 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-log-socket\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658360 4656 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658373 4656 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658383 4656 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658392 4656 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658401 4656 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658410 4656 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658419 4656 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748c84b-daec-4bf0-bda9-180d379ab075-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658427 4656 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-node-log\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658435 4656 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658445 4656 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658464 4656 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658475 4656 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658483 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68qp2\" (UniqueName: \"kubernetes.io/projected/5748c84b-daec-4bf0-bda9-180d379ab075-kube-api-access-68qp2\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658492 4656 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658501 4656 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658510 4656 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658519 4656 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-systemd-units\") on node \"crc\" 
DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.663452 4656 scope.go:117] "RemoveContainer" containerID="8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.681216 4656 scope.go:117] "RemoveContainer" containerID="8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.697276 4656 scope.go:117] "RemoveContainer" containerID="628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.709891 4656 scope.go:117] "RemoveContainer" containerID="5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.722916 4656 scope.go:117] "RemoveContainer" containerID="c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.736267 4656 scope.go:117] "RemoveContainer" containerID="f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.749790 4656 scope.go:117] "RemoveContainer" containerID="25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.758917 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-slash\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.758968 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x64t\" (UniqueName: \"kubernetes.io/projected/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-kube-api-access-8x64t\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.758996 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-systemd\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759023 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovnkube-config\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759045 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-node-log\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759076 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovn-node-metrics-cert\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759078 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-slash\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 
15:30:49.759086 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-systemd\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759126 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-systemd-units\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759097 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-systemd-units\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759133 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-node-log\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759769 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovnkube-config\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760313 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760364 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-etc-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760389 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-cni-netd\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760413 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-ovn\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760427 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-etc-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760445 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-cni-netd\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760472 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-ovn\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760411 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760481 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760438 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760549 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-run-ovn-kubernetes\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760586 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovnkube-script-lib\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760613 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-var-lib-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760699 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-env-overrides\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760732 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-kubelet\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760781 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-log-socket\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760821 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-cni-bin\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760848 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-run-netns\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760978 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-run-netns\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.761014 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-run-ovn-kubernetes\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.761258 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-kubelet\") pod 
\"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.761343 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-var-lib-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.761392 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-cni-bin\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.761643 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovnkube-script-lib\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.761907 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-log-socket\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.762628 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-env-overrides\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.762856 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovn-node-metrics-cert\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.767140 4656 scope.go:117] "RemoveContainer" containerID="318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.767762 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": container with ID starting with 318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9 not found: ID does not exist" containerID="318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.767799 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} err="failed to get container status \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": rpc error: code = NotFound desc = could not find container \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": container with ID starting with 318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.767828 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.768348 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": container with ID starting with 98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa not found: ID does not exist" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.768369 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} err="failed to get container status \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": rpc error: code = NotFound desc = could not find container \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": container with ID starting with 98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.768382 4656 scope.go:117] "RemoveContainer" containerID="be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.768639 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": container with ID starting with be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a not found: ID does not exist" containerID="be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.768658 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} err="failed to get container status \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": rpc error: code = NotFound desc = could not find container 
\"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": container with ID starting with be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.768674 4656 scope.go:117] "RemoveContainer" containerID="8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.769101 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": container with ID starting with 8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164 not found: ID does not exist" containerID="8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.769139 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} err="failed to get container status \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": rpc error: code = NotFound desc = could not find container \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": container with ID starting with 8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.769214 4656 scope.go:117] "RemoveContainer" containerID="8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.769718 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": container with ID starting with 8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2 not found: ID does not exist" 
containerID="8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.769744 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} err="failed to get container status \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": rpc error: code = NotFound desc = could not find container \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": container with ID starting with 8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.769764 4656 scope.go:117] "RemoveContainer" containerID="628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.770146 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": container with ID starting with 628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4 not found: ID does not exist" containerID="628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.770185 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} err="failed to get container status \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": rpc error: code = NotFound desc = could not find container \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": container with ID starting with 628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.770198 4656 scope.go:117] 
"RemoveContainer" containerID="5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.770504 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": container with ID starting with 5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97 not found: ID does not exist" containerID="5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.770519 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} err="failed to get container status \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": rpc error: code = NotFound desc = could not find container \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": container with ID starting with 5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.770532 4656 scope.go:117] "RemoveContainer" containerID="c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.770752 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": container with ID starting with c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880 not found: ID does not exist" containerID="c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.770770 4656 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} err="failed to get container status \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": rpc error: code = NotFound desc = could not find container \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": container with ID starting with c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.770782 4656 scope.go:117] "RemoveContainer" containerID="f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.771079 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": container with ID starting with f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee not found: ID does not exist" containerID="f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.771096 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} err="failed to get container status \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": rpc error: code = NotFound desc = could not find container \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": container with ID starting with f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.771111 4656 scope.go:117] "RemoveContainer" containerID="25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.771452 4656 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": container with ID starting with 25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970 not found: ID does not exist" containerID="25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.771473 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} err="failed to get container status \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": rpc error: code = NotFound desc = could not find container \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": container with ID starting with 25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.771485 4656 scope.go:117] "RemoveContainer" containerID="318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.771709 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} err="failed to get container status \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": rpc error: code = NotFound desc = could not find container \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": container with ID starting with 318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.771725 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772064 4656 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} err="failed to get container status \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": rpc error: code = NotFound desc = could not find container \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": container with ID starting with 98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772082 4656 scope.go:117] "RemoveContainer" containerID="be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772338 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} err="failed to get container status \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": rpc error: code = NotFound desc = could not find container \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": container with ID starting with be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772365 4656 scope.go:117] "RemoveContainer" containerID="8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772670 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} err="failed to get container status \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": rpc error: code = NotFound desc = could not find container \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": container with ID starting with 
8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772695 4656 scope.go:117] "RemoveContainer" containerID="8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772892 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} err="failed to get container status \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": rpc error: code = NotFound desc = could not find container \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": container with ID starting with 8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772907 4656 scope.go:117] "RemoveContainer" containerID="628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.773108 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} err="failed to get container status \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": rpc error: code = NotFound desc = could not find container \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": container with ID starting with 628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.773127 4656 scope.go:117] "RemoveContainer" containerID="5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.775339 4656 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} err="failed to get container status \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": rpc error: code = NotFound desc = could not find container \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": container with ID starting with 5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.775372 4656 scope.go:117] "RemoveContainer" containerID="c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.775828 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} err="failed to get container status \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": rpc error: code = NotFound desc = could not find container \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": container with ID starting with c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.775887 4656 scope.go:117] "RemoveContainer" containerID="f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.776253 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} err="failed to get container status \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": rpc error: code = NotFound desc = could not find container \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": container with ID starting with f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee not found: ID does not 
exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.776281 4656 scope.go:117] "RemoveContainer" containerID="25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.776577 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} err="failed to get container status \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": rpc error: code = NotFound desc = could not find container \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": container with ID starting with 25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.776603 4656 scope.go:117] "RemoveContainer" containerID="318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.777393 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x64t\" (UniqueName: \"kubernetes.io/projected/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-kube-api-access-8x64t\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.777821 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} err="failed to get container status \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": rpc error: code = NotFound desc = could not find container \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": container with ID starting with 318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 
15:30:49.777848 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.778186 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} err="failed to get container status \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": rpc error: code = NotFound desc = could not find container \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": container with ID starting with 98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.778224 4656 scope.go:117] "RemoveContainer" containerID="be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.778515 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} err="failed to get container status \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": rpc error: code = NotFound desc = could not find container \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": container with ID starting with be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.778542 4656 scope.go:117] "RemoveContainer" containerID="8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.778843 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} err="failed to get container status 
\"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": rpc error: code = NotFound desc = could not find container \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": container with ID starting with 8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.778869 4656 scope.go:117] "RemoveContainer" containerID="8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.779247 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} err="failed to get container status \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": rpc error: code = NotFound desc = could not find container \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": container with ID starting with 8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.779267 4656 scope.go:117] "RemoveContainer" containerID="628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.779543 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} err="failed to get container status \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": rpc error: code = NotFound desc = could not find container \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": container with ID starting with 628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.779568 4656 scope.go:117] "RemoveContainer" 
containerID="5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.779823 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} err="failed to get container status \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": rpc error: code = NotFound desc = could not find container \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": container with ID starting with 5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.779847 4656 scope.go:117] "RemoveContainer" containerID="c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.780334 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} err="failed to get container status \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": rpc error: code = NotFound desc = could not find container \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": container with ID starting with c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.780359 4656 scope.go:117] "RemoveContainer" containerID="f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.780634 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} err="failed to get container status \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": rpc error: code = NotFound desc = could 
not find container \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": container with ID starting with f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.780662 4656 scope.go:117] "RemoveContainer" containerID="25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.781275 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} err="failed to get container status \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": rpc error: code = NotFound desc = could not find container \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": container with ID starting with 25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.781303 4656 scope.go:117] "RemoveContainer" containerID="318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.781615 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} err="failed to get container status \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": rpc error: code = NotFound desc = could not find container \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": container with ID starting with 318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.781640 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 
15:30:49.781850 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} err="failed to get container status \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": rpc error: code = NotFound desc = could not find container \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": container with ID starting with 98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.781871 4656 scope.go:117] "RemoveContainer" containerID="be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.782206 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} err="failed to get container status \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": rpc error: code = NotFound desc = could not find container \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": container with ID starting with be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.782225 4656 scope.go:117] "RemoveContainer" containerID="8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.782548 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} err="failed to get container status \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": rpc error: code = NotFound desc = could not find container \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": container with ID starting with 
8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.782567 4656 scope.go:117] "RemoveContainer" containerID="8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.782802 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} err="failed to get container status \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": rpc error: code = NotFound desc = could not find container \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": container with ID starting with 8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.782829 4656 scope.go:117] "RemoveContainer" containerID="628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.783154 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} err="failed to get container status \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": rpc error: code = NotFound desc = could not find container \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": container with ID starting with 628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.783255 4656 scope.go:117] "RemoveContainer" containerID="5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.783622 4656 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} err="failed to get container status \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": rpc error: code = NotFound desc = could not find container \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": container with ID starting with 5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.783644 4656 scope.go:117] "RemoveContainer" containerID="c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.783994 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} err="failed to get container status \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": rpc error: code = NotFound desc = could not find container \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": container with ID starting with c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.784014 4656 scope.go:117] "RemoveContainer" containerID="f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.784276 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} err="failed to get container status \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": rpc error: code = NotFound desc = could not find container \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": container with ID starting with f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee not found: ID does not 
exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.784300 4656 scope.go:117] "RemoveContainer" containerID="25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.784591 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} err="failed to get container status \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": rpc error: code = NotFound desc = could not find container \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": container with ID starting with 25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.798957 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: W0128 15:30:49.818015 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod288efb1a_43ee_454e_8e5b_9a54bf7ceb3e.slice/crio-d9c0c98e8cf918cf57030c73bc0ec719174d7450d3540efe33d699ded93c2148 WatchSource:0}: Error finding container d9c0c98e8cf918cf57030c73bc0ec719174d7450d3540efe33d699ded93c2148: Status 404 returned error can't find the container with id d9c0c98e8cf918cf57030c73bc0ec719174d7450d3540efe33d699ded93c2148 Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.925507 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kwnzt"] Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.931953 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kwnzt"] Jan 28 15:30:50 crc kubenswrapper[4656]: I0128 15:30:50.585862 4656 generic.go:334] "Generic (PLEG): container finished" 
podID="288efb1a-43ee-454e-8e5b-9a54bf7ceb3e" containerID="193609b139dfaf3762a1d2c25204365ccd8cc3f4584d6d79176fc0232ec1d674" exitCode=0 Jan 28 15:30:50 crc kubenswrapper[4656]: I0128 15:30:50.585959 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerDied","Data":"193609b139dfaf3762a1d2c25204365ccd8cc3f4584d6d79176fc0232ec1d674"} Jan 28 15:30:50 crc kubenswrapper[4656]: I0128 15:30:50.586322 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"d9c0c98e8cf918cf57030c73bc0ec719174d7450d3540efe33d699ded93c2148"} Jan 28 15:30:50 crc kubenswrapper[4656]: I0128 15:30:50.591641 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/2.log" Jan 28 15:30:51 crc kubenswrapper[4656]: I0128 15:30:51.177680 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" path="/var/lib/kubelet/pods/5748c84b-daec-4bf0-bda9-180d379ab075/volumes" Jan 28 15:30:51 crc kubenswrapper[4656]: I0128 15:30:51.599825 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"894ecc8b54f498c6442f75ee0ee117f9d2f807bbea2aa9544c7bef46735db625"} Jan 28 15:30:51 crc kubenswrapper[4656]: I0128 15:30:51.599890 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"2d3503edaf413cefe3a9d08b478b707e744fce610d4d42bf9485c85c0e866cdf"} Jan 28 15:30:51 crc kubenswrapper[4656]: I0128 15:30:51.599904 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"d0499e46e6c9d828eae35a9a8b560d686e6a404557eb8a768129bdd950f87624"} Jan 28 15:30:51 crc kubenswrapper[4656]: I0128 15:30:51.599917 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"16ea70d1af063790e9a0f480da320df1c28849f2ceb8f375a1d5b1620b47afeb"} Jan 28 15:30:51 crc kubenswrapper[4656]: I0128 15:30:51.599931 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"38bc9e9cfefae341196e2052015d7707b654ab3a782f98e088c8b0f5a4f7faf0"} Jan 28 15:30:51 crc kubenswrapper[4656]: I0128 15:30:51.599944 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"a62e7e80a9e6f4f7e644b10b1bb3292e1266074ce6edd5f9402f8c2bed4d45e5"} Jan 28 15:30:54 crc kubenswrapper[4656]: I0128 15:30:54.623335 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"1415e63dd0aa4812b1040ef39795c9209f611e6b1e4fafbccda1602a4df570ed"} Jan 28 15:30:56 crc kubenswrapper[4656]: I0128 15:30:56.637459 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"481d9714377eeec0f9905ed51981b2db12a950371fb68f7fc58629b246538b6e"} Jan 28 15:30:56 crc kubenswrapper[4656]: I0128 15:30:56.638825 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:56 
crc kubenswrapper[4656]: I0128 15:30:56.638858 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:56 crc kubenswrapper[4656]: I0128 15:30:56.638907 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:56 crc kubenswrapper[4656]: I0128 15:30:56.669645 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:56 crc kubenswrapper[4656]: I0128 15:30:56.670311 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:56 crc kubenswrapper[4656]: I0128 15:30:56.694932 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" podStartSLOduration=7.694796706 podStartE2EDuration="7.694796706s" podCreationTimestamp="2026-01-28 15:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:56.673034836 +0000 UTC m=+747.181205640" watchObservedRunningTime="2026-01-28 15:30:56.694796706 +0000 UTC m=+747.202967510" Jan 28 15:31:01 crc kubenswrapper[4656]: I0128 15:31:01.172343 4656 scope.go:117] "RemoveContainer" containerID="34fa797442b557de0e9ffab2d826f22ba8d92221e464edd57e5778604260c2bd" Jan 28 15:31:01 crc kubenswrapper[4656]: E0128 15:31:01.173026 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rpzjg_openshift-multus(7662a84d-d9cb-4684-b76f-c63ffeff8344)\"" pod="openshift-multus/multus-rpzjg" podUID="7662a84d-d9cb-4684-b76f-c63ffeff8344" Jan 28 15:31:14 crc kubenswrapper[4656]: I0128 15:31:14.214225 4656 scope.go:117] "RemoveContainer" 
containerID="34fa797442b557de0e9ffab2d826f22ba8d92221e464edd57e5778604260c2bd" Jan 28 15:31:14 crc kubenswrapper[4656]: I0128 15:31:14.760429 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/2.log" Jan 28 15:31:14 crc kubenswrapper[4656]: I0128 15:31:14.760839 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerStarted","Data":"1f2d28087e84390abd2da79385ecb0e570d437b9df6c5df422bd2d41b479a1c4"} Jan 28 15:31:14 crc kubenswrapper[4656]: I0128 15:31:14.901479 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj"] Jan 28 15:31:14 crc kubenswrapper[4656]: I0128 15:31:14.903134 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:14 crc kubenswrapper[4656]: I0128 15:31:14.911044 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 15:31:14 crc kubenswrapper[4656]: I0128 15:31:14.920255 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jhjw\" (UniqueName: \"kubernetes.io/projected/dafc02bf-d18b-4177-afa6-ac17360b54e9-kube-api-access-9jhjw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:14 crc kubenswrapper[4656]: I0128 15:31:14.920337 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-bundle\") pod 
\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:14 crc kubenswrapper[4656]: I0128 15:31:14.920385 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:14 crc kubenswrapper[4656]: I0128 15:31:14.967122 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj"] Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.027962 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jhjw\" (UniqueName: \"kubernetes.io/projected/dafc02bf-d18b-4177-afa6-ac17360b54e9-kube-api-access-9jhjw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.028045 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.028092 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.028724 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.028912 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.058182 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jhjw\" (UniqueName: \"kubernetes.io/projected/dafc02bf-d18b-4177-afa6-ac17360b54e9-kube-api-access-9jhjw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.219342 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.240383 4656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(8e1f6149b059e645054740cdf2a00a19d5559bd7bdc9c1a5b135183bf89a2b7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.240540 4656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(8e1f6149b059e645054740cdf2a00a19d5559bd7bdc9c1a5b135183bf89a2b7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.240586 4656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(8e1f6149b059e645054740cdf2a00a19d5559bd7bdc9c1a5b135183bf89a2b7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.240678 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace(dafc02bf-d18b-4177-afa6-ac17360b54e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace(dafc02bf-d18b-4177-afa6-ac17360b54e9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(8e1f6149b059e645054740cdf2a00a19d5559bd7bdc9c1a5b135183bf89a2b7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.775786 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.777688 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.797502 4656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(208b588a364c1a1011bebae67a2b4cbde214f44c676496dfc52db1dabb1b8907): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.797595 4656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(208b588a364c1a1011bebae67a2b4cbde214f44c676496dfc52db1dabb1b8907): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.797626 4656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(208b588a364c1a1011bebae67a2b4cbde214f44c676496dfc52db1dabb1b8907): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.797689 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace(dafc02bf-d18b-4177-afa6-ac17360b54e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace(dafc02bf-d18b-4177-afa6-ac17360b54e9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(208b588a364c1a1011bebae67a2b4cbde214f44c676496dfc52db1dabb1b8907): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" Jan 28 15:31:19 crc kubenswrapper[4656]: I0128 15:31:19.824883 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:31:29 crc kubenswrapper[4656]: I0128 15:31:29.170725 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:29 crc kubenswrapper[4656]: I0128 15:31:29.171750 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:29 crc kubenswrapper[4656]: I0128 15:31:29.380472 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj"] Jan 28 15:31:29 crc kubenswrapper[4656]: W0128 15:31:29.386732 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddafc02bf_d18b_4177_afa6_ac17360b54e9.slice/crio-63a40a7bcf7c02adf018bb3dee2bd413ab23d334473d795da61b68f34426c7f4 WatchSource:0}: Error finding container 63a40a7bcf7c02adf018bb3dee2bd413ab23d334473d795da61b68f34426c7f4: Status 404 returned error can't find the container with id 63a40a7bcf7c02adf018bb3dee2bd413ab23d334473d795da61b68f34426c7f4 Jan 28 15:31:29 crc kubenswrapper[4656]: I0128 15:31:29.852766 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" event={"ID":"dafc02bf-d18b-4177-afa6-ac17360b54e9","Type":"ContainerStarted","Data":"b23e46dcc4394338299dd14630426ac5043a4c1179cca00604ce955a0eb5ef96"} Jan 28 15:31:29 crc kubenswrapper[4656]: I0128 15:31:29.852819 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" event={"ID":"dafc02bf-d18b-4177-afa6-ac17360b54e9","Type":"ContainerStarted","Data":"63a40a7bcf7c02adf018bb3dee2bd413ab23d334473d795da61b68f34426c7f4"} Jan 28 15:31:30 crc kubenswrapper[4656]: I0128 15:31:30.859582 4656 generic.go:334] "Generic (PLEG): container finished" podID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerID="b23e46dcc4394338299dd14630426ac5043a4c1179cca00604ce955a0eb5ef96" exitCode=0 Jan 28 15:31:30 crc kubenswrapper[4656]: I0128 15:31:30.859637 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" event={"ID":"dafc02bf-d18b-4177-afa6-ac17360b54e9","Type":"ContainerDied","Data":"b23e46dcc4394338299dd14630426ac5043a4c1179cca00604ce955a0eb5ef96"} Jan 28 15:31:33 crc kubenswrapper[4656]: I0128 15:31:33.881994 4656 generic.go:334] "Generic (PLEG): container finished" podID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerID="8b096fce66adb8bf76e69883ec65d094814540117aac1a98d3780111e5587f31" exitCode=0 Jan 28 15:31:33 crc kubenswrapper[4656]: I0128 15:31:33.882311 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" event={"ID":"dafc02bf-d18b-4177-afa6-ac17360b54e9","Type":"ContainerDied","Data":"8b096fce66adb8bf76e69883ec65d094814540117aac1a98d3780111e5587f31"} Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.305476 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v8b7s"] Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.307057 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.318732 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v8b7s"] Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.444903 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-catalog-content\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.444989 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-utilities\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.445047 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v27mf\" (UniqueName: \"kubernetes.io/projected/27f0ce7c-11c5-4334-b92c-ddae4644eafd-kube-api-access-v27mf\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.545864 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-utilities\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.545947 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-v27mf\" (UniqueName: \"kubernetes.io/projected/27f0ce7c-11c5-4334-b92c-ddae4644eafd-kube-api-access-v27mf\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.545996 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-catalog-content\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.546548 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-utilities\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.546599 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-catalog-content\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.576407 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v27mf\" (UniqueName: \"kubernetes.io/projected/27f0ce7c-11c5-4334-b92c-ddae4644eafd-kube-api-access-v27mf\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.623106 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.890075 4656 generic.go:334] "Generic (PLEG): container finished" podID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerID="8c93d298af03c828c523d0a9a316b29a5a13f68151396a3ee4c3f297236cd348" exitCode=0 Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.890124 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" event={"ID":"dafc02bf-d18b-4177-afa6-ac17360b54e9","Type":"ContainerDied","Data":"8c93d298af03c828c523d0a9a316b29a5a13f68151396a3ee4c3f297236cd348"} Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.905234 4656 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.909433 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v8b7s"] Jan 28 15:31:34 crc kubenswrapper[4656]: W0128 15:31:34.915477 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f0ce7c_11c5_4334_b92c_ddae4644eafd.slice/crio-eb8e5dcfdab5dac41b6867a6429f4754a4195f65d6c6026801b73a87a7fe47c5 WatchSource:0}: Error finding container eb8e5dcfdab5dac41b6867a6429f4754a4195f65d6c6026801b73a87a7fe47c5: Status 404 returned error can't find the container with id eb8e5dcfdab5dac41b6867a6429f4754a4195f65d6c6026801b73a87a7fe47c5 Jan 28 15:31:35 crc kubenswrapper[4656]: I0128 15:31:35.896151 4656 generic.go:334] "Generic (PLEG): container finished" podID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerID="1c7cf96542fe516bc05bc2a374b423dd9244a24b1fc85dce4401caa1a519df5e" exitCode=0 Jan 28 15:31:35 crc kubenswrapper[4656]: I0128 15:31:35.896392 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-v8b7s" event={"ID":"27f0ce7c-11c5-4334-b92c-ddae4644eafd","Type":"ContainerDied","Data":"1c7cf96542fe516bc05bc2a374b423dd9244a24b1fc85dce4401caa1a519df5e"} Jan 28 15:31:35 crc kubenswrapper[4656]: I0128 15:31:35.896419 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8b7s" event={"ID":"27f0ce7c-11c5-4334-b92c-ddae4644eafd","Type":"ContainerStarted","Data":"eb8e5dcfdab5dac41b6867a6429f4754a4195f65d6c6026801b73a87a7fe47c5"} Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.108327 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.175218 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-util\") pod \"dafc02bf-d18b-4177-afa6-ac17360b54e9\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.175323 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-bundle\") pod \"dafc02bf-d18b-4177-afa6-ac17360b54e9\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.175357 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jhjw\" (UniqueName: \"kubernetes.io/projected/dafc02bf-d18b-4177-afa6-ac17360b54e9-kube-api-access-9jhjw\") pod \"dafc02bf-d18b-4177-afa6-ac17360b54e9\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.176185 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-bundle" (OuterVolumeSpecName: "bundle") pod "dafc02bf-d18b-4177-afa6-ac17360b54e9" (UID: "dafc02bf-d18b-4177-afa6-ac17360b54e9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.180627 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dafc02bf-d18b-4177-afa6-ac17360b54e9-kube-api-access-9jhjw" (OuterVolumeSpecName: "kube-api-access-9jhjw") pod "dafc02bf-d18b-4177-afa6-ac17360b54e9" (UID: "dafc02bf-d18b-4177-afa6-ac17360b54e9"). InnerVolumeSpecName "kube-api-access-9jhjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.186777 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-util" (OuterVolumeSpecName: "util") pod "dafc02bf-d18b-4177-afa6-ac17360b54e9" (UID: "dafc02bf-d18b-4177-afa6-ac17360b54e9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.277071 4656 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.277150 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jhjw\" (UniqueName: \"kubernetes.io/projected/dafc02bf-d18b-4177-afa6-ac17360b54e9-kube-api-access-9jhjw\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.277189 4656 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.905270 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" event={"ID":"dafc02bf-d18b-4177-afa6-ac17360b54e9","Type":"ContainerDied","Data":"63a40a7bcf7c02adf018bb3dee2bd413ab23d334473d795da61b68f34426c7f4"} Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.905359 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63a40a7bcf7c02adf018bb3dee2bd413ab23d334473d795da61b68f34426c7f4" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.906769 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:37 crc kubenswrapper[4656]: I0128 15:31:37.912092 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8b7s" event={"ID":"27f0ce7c-11c5-4334-b92c-ddae4644eafd","Type":"ContainerStarted","Data":"dd449c48f558ae08f3c1e6725aa8ce1a5913a7b3d5c5a25666c90cd1deaf89cc"} Jan 28 15:31:38 crc kubenswrapper[4656]: I0128 15:31:38.919924 4656 generic.go:334] "Generic (PLEG): container finished" podID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerID="dd449c48f558ae08f3c1e6725aa8ce1a5913a7b3d5c5a25666c90cd1deaf89cc" exitCode=0 Jan 28 15:31:38 crc kubenswrapper[4656]: I0128 15:31:38.919971 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8b7s" event={"ID":"27f0ce7c-11c5-4334-b92c-ddae4644eafd","Type":"ContainerDied","Data":"dd449c48f558ae08f3c1e6725aa8ce1a5913a7b3d5c5a25666c90cd1deaf89cc"} Jan 28 15:31:39 crc kubenswrapper[4656]: I0128 15:31:39.927450 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8b7s" event={"ID":"27f0ce7c-11c5-4334-b92c-ddae4644eafd","Type":"ContainerStarted","Data":"678c7a77544aff44ae0b905161c93e3879eef66dcec39898bf652f456ae0a4d2"} Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.796717 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v8b7s" podStartSLOduration=4.053362416 podStartE2EDuration="6.796697199s" podCreationTimestamp="2026-01-28 15:31:34 +0000 UTC" firstStartedPulling="2026-01-28 15:31:36.909210944 +0000 UTC m=+787.417381748" lastFinishedPulling="2026-01-28 15:31:39.652545727 +0000 UTC m=+790.160716531" observedRunningTime="2026-01-28 15:31:39.947101786 +0000 UTC m=+790.455272600" watchObservedRunningTime="2026-01-28 15:31:40.796697199 +0000 UTC m=+791.304868003" Jan 28 15:31:40 crc 
kubenswrapper[4656]: I0128 15:31:40.797258 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bvtmh"] Jan 28 15:31:40 crc kubenswrapper[4656]: E0128 15:31:40.797474 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerName="util" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.797497 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerName="util" Jan 28 15:31:40 crc kubenswrapper[4656]: E0128 15:31:40.797517 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerName="pull" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.797523 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerName="pull" Jan 28 15:31:40 crc kubenswrapper[4656]: E0128 15:31:40.797533 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerName="extract" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.797541 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerName="extract" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.797655 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerName="extract" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.798190 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.801087 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.801542 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.802060 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-9r959" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.838600 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zvdw\" (UniqueName: \"kubernetes.io/projected/66b69ecf-d4cf-452c-a311-93cecb247ab1-kube-api-access-4zvdw\") pod \"nmstate-operator-646758c888-bvtmh\" (UID: \"66b69ecf-d4cf-452c-a311-93cecb247ab1\") " pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.864237 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bvtmh"] Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.940680 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zvdw\" (UniqueName: \"kubernetes.io/projected/66b69ecf-d4cf-452c-a311-93cecb247ab1-kube-api-access-4zvdw\") pod \"nmstate-operator-646758c888-bvtmh\" (UID: \"66b69ecf-d4cf-452c-a311-93cecb247ab1\") " pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.964053 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zvdw\" (UniqueName: \"kubernetes.io/projected/66b69ecf-d4cf-452c-a311-93cecb247ab1-kube-api-access-4zvdw\") pod \"nmstate-operator-646758c888-bvtmh\" (UID: 
\"66b69ecf-d4cf-452c-a311-93cecb247ab1\") " pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" Jan 28 15:31:41 crc kubenswrapper[4656]: I0128 15:31:41.115537 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" Jan 28 15:31:41 crc kubenswrapper[4656]: I0128 15:31:41.352531 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bvtmh"] Jan 28 15:31:41 crc kubenswrapper[4656]: I0128 15:31:41.965514 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" event={"ID":"66b69ecf-d4cf-452c-a311-93cecb247ab1","Type":"ContainerStarted","Data":"a4610394493c7322c88c17b7675c7aca529bb9ec8951590d9f33069e4c0b03f4"} Jan 28 15:31:44 crc kubenswrapper[4656]: I0128 15:31:44.624287 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:44 crc kubenswrapper[4656]: I0128 15:31:44.625648 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:45 crc kubenswrapper[4656]: I0128 15:31:45.664323 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v8b7s" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="registry-server" probeResult="failure" output=< Jan 28 15:31:45 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 15:31:45 crc kubenswrapper[4656]: > Jan 28 15:31:45 crc kubenswrapper[4656]: I0128 15:31:45.995415 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" event={"ID":"66b69ecf-d4cf-452c-a311-93cecb247ab1","Type":"ContainerStarted","Data":"e4b05892bf48c58652d437ed4aba96619d96b932ccc7a58dfa14869704a04182"} Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.023114 4656 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" podStartSLOduration=2.410347537 podStartE2EDuration="6.023086424s" podCreationTimestamp="2026-01-28 15:31:40 +0000 UTC" firstStartedPulling="2026-01-28 15:31:41.359508898 +0000 UTC m=+791.867679702" lastFinishedPulling="2026-01-28 15:31:44.972247785 +0000 UTC m=+795.480418589" observedRunningTime="2026-01-28 15:31:46.016155985 +0000 UTC m=+796.524344169" watchObservedRunningTime="2026-01-28 15:31:46.023086424 +0000 UTC m=+796.531257228" Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.930835 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-xjdms"] Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.932107 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.936421 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x"] Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.937207 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.941635 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.942585 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mgc4r" Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.999130 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-rjhhj"] Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.000041 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.004272 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-xjdms"] Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.012495 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x"] Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.090060 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm7lb\" (UniqueName: \"kubernetes.io/projected/0b2d8d4a-d2ba-4c29-b545-f23070527595-kube-api-access-wm7lb\") pod \"nmstate-webhook-8474b5b9d8-rrl4x\" (UID: \"0b2d8d4a-d2ba-4c29-b545-f23070527595\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.090126 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvnfb\" (UniqueName: \"kubernetes.io/projected/59adf6ef-2655-44f4-ae3d-91c315439598-kube-api-access-pvnfb\") pod \"nmstate-metrics-54757c584b-xjdms\" (UID: \"59adf6ef-2655-44f4-ae3d-91c315439598\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.090225 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0b2d8d4a-d2ba-4c29-b545-f23070527595-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rrl4x\" (UID: \"0b2d8d4a-d2ba-4c29-b545-f23070527595\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.191551 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-dbus-socket\") pod 
\"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.191628 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm7lb\" (UniqueName: \"kubernetes.io/projected/0b2d8d4a-d2ba-4c29-b545-f23070527595-kube-api-access-wm7lb\") pod \"nmstate-webhook-8474b5b9d8-rrl4x\" (UID: \"0b2d8d4a-d2ba-4c29-b545-f23070527595\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.191799 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvnfb\" (UniqueName: \"kubernetes.io/projected/59adf6ef-2655-44f4-ae3d-91c315439598-kube-api-access-pvnfb\") pod \"nmstate-metrics-54757c584b-xjdms\" (UID: \"59adf6ef-2655-44f4-ae3d-91c315439598\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.191868 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-nmstate-lock\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.191960 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0b2d8d4a-d2ba-4c29-b545-f23070527595-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rrl4x\" (UID: \"0b2d8d4a-d2ba-4c29-b545-f23070527595\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.192112 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c957l\" (UniqueName: 
\"kubernetes.io/projected/41fa0969-44fb-4cf4-916c-da0dd393a58c-kube-api-access-c957l\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: E0128 15:31:47.192110 4656 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.192177 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-ovs-socket\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: E0128 15:31:47.192277 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b2d8d4a-d2ba-4c29-b545-f23070527595-tls-key-pair podName:0b2d8d4a-d2ba-4c29-b545-f23070527595 nodeName:}" failed. No retries permitted until 2026-01-28 15:31:47.692228364 +0000 UTC m=+798.200399168 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/0b2d8d4a-d2ba-4c29-b545-f23070527595-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-rrl4x" (UID: "0b2d8d4a-d2ba-4c29-b545-f23070527595") : secret "openshift-nmstate-webhook" not found Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.223241 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm7lb\" (UniqueName: \"kubernetes.io/projected/0b2d8d4a-d2ba-4c29-b545-f23070527595-kube-api-access-wm7lb\") pod \"nmstate-webhook-8474b5b9d8-rrl4x\" (UID: \"0b2d8d4a-d2ba-4c29-b545-f23070527595\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.234689 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvnfb\" (UniqueName: \"kubernetes.io/projected/59adf6ef-2655-44f4-ae3d-91c315439598-kube-api-access-pvnfb\") pod \"nmstate-metrics-54757c584b-xjdms\" (UID: \"59adf6ef-2655-44f4-ae3d-91c315439598\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.253776 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.278136 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr"] Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.279004 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.285648 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.285953 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hng9h" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.294020 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c957l\" (UniqueName: \"kubernetes.io/projected/41fa0969-44fb-4cf4-916c-da0dd393a58c-kube-api-access-c957l\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.294071 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-ovs-socket\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.294113 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-dbus-socket\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.294147 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-nmstate-lock\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc 
kubenswrapper[4656]: I0128 15:31:47.294262 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-nmstate-lock\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.294551 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-ovs-socket\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.295454 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-dbus-socket\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.296736 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.321870 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr"] Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.326794 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c957l\" (UniqueName: \"kubernetes.io/projected/41fa0969-44fb-4cf4-916c-da0dd393a58c-kube-api-access-c957l\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.397918 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vrtqc\" (UniqueName: \"kubernetes.io/projected/c74f938e-5184-4ea3-afe5-373ef61c779a-kube-api-access-vrtqc\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.397976 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c74f938e-5184-4ea3-afe5-373ef61c779a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.398060 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c74f938e-5184-4ea3-afe5-373ef61c779a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.474454 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5567b6b6fc-hb2cx"] Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.475842 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.489665 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5567b6b6fc-hb2cx"] Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.504930 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrtqc\" (UniqueName: \"kubernetes.io/projected/c74f938e-5184-4ea3-afe5-373ef61c779a-kube-api-access-vrtqc\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.505094 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c74f938e-5184-4ea3-afe5-373ef61c779a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.505291 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c74f938e-5184-4ea3-afe5-373ef61c779a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: E0128 15:31:47.513044 4656 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 28 15:31:47 crc kubenswrapper[4656]: E0128 15:31:47.513178 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c74f938e-5184-4ea3-afe5-373ef61c779a-plugin-serving-cert podName:c74f938e-5184-4ea3-afe5-373ef61c779a nodeName:}" failed. 
No retries permitted until 2026-01-28 15:31:48.013123369 +0000 UTC m=+798.521294183 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/c74f938e-5184-4ea3-afe5-373ef61c779a-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-2bhpr" (UID: "c74f938e-5184-4ea3-afe5-373ef61c779a") : secret "plugin-serving-cert" not found Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.513892 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c74f938e-5184-4ea3-afe5-373ef61c779a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.537677 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrtqc\" (UniqueName: \"kubernetes.io/projected/c74f938e-5184-4ea3-afe5-373ef61c779a-kube-api-access-vrtqc\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.615221 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-service-ca\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.615293 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-oauth-serving-cert\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " 
pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.615324 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fkz8\" (UniqueName: \"kubernetes.io/projected/b7c707ef-4b20-402f-9a10-d982dd0f59dd-kube-api-access-8fkz8\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.615341 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-trusted-ca-bundle\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.615358 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-serving-cert\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.615380 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-config\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.615404 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-oauth-config\") pod 
\"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.623316 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.625609 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-xjdms"] Jan 28 15:31:47 crc kubenswrapper[4656]: W0128 15:31:47.659207 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41fa0969_44fb_4cf4_916c_da0dd393a58c.slice/crio-eed67e51a9408ec961702fc368dbf4a559930f33c336cc7a646315d69ef4e78b WatchSource:0}: Error finding container eed67e51a9408ec961702fc368dbf4a559930f33c336cc7a646315d69ef4e78b: Status 404 returned error can't find the container with id eed67e51a9408ec961702fc368dbf4a559930f33c336cc7a646315d69ef4e78b Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.716861 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0b2d8d4a-d2ba-4c29-b545-f23070527595-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rrl4x\" (UID: \"0b2d8d4a-d2ba-4c29-b545-f23070527595\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.716926 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-service-ca\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.716994 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-oauth-serving-cert\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.717021 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fkz8\" (UniqueName: \"kubernetes.io/projected/b7c707ef-4b20-402f-9a10-d982dd0f59dd-kube-api-access-8fkz8\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.717045 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-trusted-ca-bundle\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.717075 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-serving-cert\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.717116 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-config\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.717155 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-oauth-config\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.718050 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-oauth-serving-cert\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.718116 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-service-ca\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.718881 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-config\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.719347 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-trusted-ca-bundle\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.723659 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0b2d8d4a-d2ba-4c29-b545-f23070527595-tls-key-pair\") pod 
\"nmstate-webhook-8474b5b9d8-rrl4x\" (UID: \"0b2d8d4a-d2ba-4c29-b545-f23070527595\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.723697 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-oauth-config\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.723748 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-serving-cert\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.737047 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fkz8\" (UniqueName: \"kubernetes.io/projected/b7c707ef-4b20-402f-9a10-d982dd0f59dd-kube-api-access-8fkz8\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.814666 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.866321 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.015465 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-rjhhj" event={"ID":"41fa0969-44fb-4cf4-916c-da0dd393a58c","Type":"ContainerStarted","Data":"eed67e51a9408ec961702fc368dbf4a559930f33c336cc7a646315d69ef4e78b"} Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.016846 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" event={"ID":"59adf6ef-2655-44f4-ae3d-91c315439598","Type":"ContainerStarted","Data":"f9b271005704877eca705e22367b2c57c420ed1fb5659025a900e0896db90b06"} Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.025928 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c74f938e-5184-4ea3-afe5-373ef61c779a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.037805 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c74f938e-5184-4ea3-afe5-373ef61c779a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:48 crc kubenswrapper[4656]: W0128 15:31:48.063099 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7c707ef_4b20_402f_9a10_d982dd0f59dd.slice/crio-a35ee7354f42f7415c87fe0e1d6ea521b2d1ffa71875c1c1f0dd87b545a9e96d WatchSource:0}: Error finding container a35ee7354f42f7415c87fe0e1d6ea521b2d1ffa71875c1c1f0dd87b545a9e96d: Status 404 
returned error can't find the container with id a35ee7354f42f7415c87fe0e1d6ea521b2d1ffa71875c1c1f0dd87b545a9e96d Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.064408 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5567b6b6fc-hb2cx"] Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.118370 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x"] Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.252674 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.479582 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr"] Jan 28 15:31:48 crc kubenswrapper[4656]: W0128 15:31:48.486406 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc74f938e_5184_4ea3_afe5_373ef61c779a.slice/crio-1738a1ceb6cd6e4216e2eb6851a2389489a7b6a9317e2b6f5dab57cf12d5a756 WatchSource:0}: Error finding container 1738a1ceb6cd6e4216e2eb6851a2389489a7b6a9317e2b6f5dab57cf12d5a756: Status 404 returned error can't find the container with id 1738a1ceb6cd6e4216e2eb6851a2389489a7b6a9317e2b6f5dab57cf12d5a756 Jan 28 15:31:49 crc kubenswrapper[4656]: I0128 15:31:49.087337 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" event={"ID":"0b2d8d4a-d2ba-4c29-b545-f23070527595","Type":"ContainerStarted","Data":"41632c767cea4fd4b10bc5a7c8b852338afedba5f28015b6985fdbe94d1072e4"} Jan 28 15:31:49 crc kubenswrapper[4656]: I0128 15:31:49.093624 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5567b6b6fc-hb2cx" 
event={"ID":"b7c707ef-4b20-402f-9a10-d982dd0f59dd","Type":"ContainerStarted","Data":"27cd4e3de32b971b6b9feeb29c3013eebdef15c0b0081aeb3f894522ea0c9bf7"} Jan 28 15:31:49 crc kubenswrapper[4656]: I0128 15:31:49.093707 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5567b6b6fc-hb2cx" event={"ID":"b7c707ef-4b20-402f-9a10-d982dd0f59dd","Type":"ContainerStarted","Data":"a35ee7354f42f7415c87fe0e1d6ea521b2d1ffa71875c1c1f0dd87b545a9e96d"} Jan 28 15:31:49 crc kubenswrapper[4656]: I0128 15:31:49.095667 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" event={"ID":"c74f938e-5184-4ea3-afe5-373ef61c779a","Type":"ContainerStarted","Data":"1738a1ceb6cd6e4216e2eb6851a2389489a7b6a9317e2b6f5dab57cf12d5a756"} Jan 28 15:31:49 crc kubenswrapper[4656]: I0128 15:31:49.131869 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5567b6b6fc-hb2cx" podStartSLOduration=2.131844092 podStartE2EDuration="2.131844092s" podCreationTimestamp="2026-01-28 15:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:31:49.121422183 +0000 UTC m=+799.629592997" watchObservedRunningTime="2026-01-28 15:31:49.131844092 +0000 UTC m=+799.640014896" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.150289 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" event={"ID":"0b2d8d4a-d2ba-4c29-b545-f23070527595","Type":"ContainerStarted","Data":"974bf24ad54875ac334c30e845e0d22e05bda2e69b7b25fb3df9867da0612917"} Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.150991 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.154800 4656 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" event={"ID":"59adf6ef-2655-44f4-ae3d-91c315439598","Type":"ContainerStarted","Data":"e84e9ab9b83c016813a6f889eeca8cc888bacc714560990232a8892e4afeebf8"} Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.156400 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" event={"ID":"c74f938e-5184-4ea3-afe5-373ef61c779a","Type":"ContainerStarted","Data":"3df4a821d56cbb30c99bbd635c36b1b308aa7a7a6914c34862dc31778c14d3e3"} Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.158540 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-rjhhj" event={"ID":"41fa0969-44fb-4cf4-916c-da0dd393a58c","Type":"ContainerStarted","Data":"e8961c37a6a911378d40ea3b26cad907348f7f0b2bb60c3acf40ab5d5bb7b1dc"} Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.158749 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.174901 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" podStartSLOduration=3.075339869 podStartE2EDuration="8.174869157s" podCreationTimestamp="2026-01-28 15:31:46 +0000 UTC" firstStartedPulling="2026-01-28 15:31:48.168748516 +0000 UTC m=+798.676919330" lastFinishedPulling="2026-01-28 15:31:53.268277804 +0000 UTC m=+803.776448618" observedRunningTime="2026-01-28 15:31:54.171789408 +0000 UTC m=+804.679960212" watchObservedRunningTime="2026-01-28 15:31:54.174869157 +0000 UTC m=+804.683039961" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.199284 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-rjhhj" podStartSLOduration=2.56192045 podStartE2EDuration="8.199237167s" podCreationTimestamp="2026-01-28 15:31:46 +0000 UTC" 
firstStartedPulling="2026-01-28 15:31:47.666125178 +0000 UTC m=+798.174295982" lastFinishedPulling="2026-01-28 15:31:53.303441895 +0000 UTC m=+803.811612699" observedRunningTime="2026-01-28 15:31:54.194743278 +0000 UTC m=+804.702914082" watchObservedRunningTime="2026-01-28 15:31:54.199237167 +0000 UTC m=+804.707407971" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.227377 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" podStartSLOduration=2.447094355 podStartE2EDuration="7.227065127s" podCreationTimestamp="2026-01-28 15:31:47 +0000 UTC" firstStartedPulling="2026-01-28 15:31:48.488026704 +0000 UTC m=+798.996197508" lastFinishedPulling="2026-01-28 15:31:53.267997476 +0000 UTC m=+803.776168280" observedRunningTime="2026-01-28 15:31:54.221668902 +0000 UTC m=+804.729839716" watchObservedRunningTime="2026-01-28 15:31:54.227065127 +0000 UTC m=+804.735235961" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.664518 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.708895 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.901778 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v8b7s"] Jan 28 15:31:56 crc kubenswrapper[4656]: I0128 15:31:56.176553 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v8b7s" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="registry-server" containerID="cri-o://678c7a77544aff44ae0b905161c93e3879eef66dcec39898bf652f456ae0a4d2" gracePeriod=2 Jan 28 15:31:57 crc kubenswrapper[4656]: I0128 15:31:57.190258 4656 generic.go:334] "Generic (PLEG): container finished" 
podID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerID="678c7a77544aff44ae0b905161c93e3879eef66dcec39898bf652f456ae0a4d2" exitCode=0 Jan 28 15:31:57 crc kubenswrapper[4656]: I0128 15:31:57.190311 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8b7s" event={"ID":"27f0ce7c-11c5-4334-b92c-ddae4644eafd","Type":"ContainerDied","Data":"678c7a77544aff44ae0b905161c93e3879eef66dcec39898bf652f456ae0a4d2"} Jan 28 15:31:57 crc kubenswrapper[4656]: I0128 15:31:57.815226 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:57 crc kubenswrapper[4656]: I0128 15:31:57.815293 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:57 crc kubenswrapper[4656]: I0128 15:31:57.821026 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:57 crc kubenswrapper[4656]: I0128 15:31:57.897546 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.072845 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v27mf\" (UniqueName: \"kubernetes.io/projected/27f0ce7c-11c5-4334-b92c-ddae4644eafd-kube-api-access-v27mf\") pod \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.072953 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-utilities\") pod \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.073120 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-catalog-content\") pod \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.074550 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-utilities" (OuterVolumeSpecName: "utilities") pod "27f0ce7c-11c5-4334-b92c-ddae4644eafd" (UID: "27f0ce7c-11c5-4334-b92c-ddae4644eafd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.080069 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f0ce7c-11c5-4334-b92c-ddae4644eafd-kube-api-access-v27mf" (OuterVolumeSpecName: "kube-api-access-v27mf") pod "27f0ce7c-11c5-4334-b92c-ddae4644eafd" (UID: "27f0ce7c-11c5-4334-b92c-ddae4644eafd"). InnerVolumeSpecName "kube-api-access-v27mf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.174366 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.174398 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v27mf\" (UniqueName: \"kubernetes.io/projected/27f0ce7c-11c5-4334-b92c-ddae4644eafd-kube-api-access-v27mf\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.198430 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8b7s" event={"ID":"27f0ce7c-11c5-4334-b92c-ddae4644eafd","Type":"ContainerDied","Data":"eb8e5dcfdab5dac41b6867a6429f4754a4195f65d6c6026801b73a87a7fe47c5"} Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.198495 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.198520 4656 scope.go:117] "RemoveContainer" containerID="678c7a77544aff44ae0b905161c93e3879eef66dcec39898bf652f456ae0a4d2" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.200549 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" event={"ID":"59adf6ef-2655-44f4-ae3d-91c315439598","Type":"ContainerStarted","Data":"ed75f2451fe9a22f75612d489fe154ff9974b2e69f4ebbdff65167e538418c65"} Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.204344 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.221147 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27f0ce7c-11c5-4334-b92c-ddae4644eafd" (UID: "27f0ce7c-11c5-4334-b92c-ddae4644eafd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.221469 4656 scope.go:117] "RemoveContainer" containerID="dd449c48f558ae08f3c1e6725aa8ce1a5913a7b3d5c5a25666c90cd1deaf89cc" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.265448 4656 scope.go:117] "RemoveContainer" containerID="1c7cf96542fe516bc05bc2a374b423dd9244a24b1fc85dce4401caa1a519df5e" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.278435 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.291655 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jrkdc"] Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.532262 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v8b7s"] Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.536368 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v8b7s"] Jan 28 15:31:59 crc kubenswrapper[4656]: I0128 15:31:59.177633 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" path="/var/lib/kubelet/pods/27f0ce7c-11c5-4334-b92c-ddae4644eafd/volumes" Jan 28 15:31:59 crc kubenswrapper[4656]: I0128 15:31:59.222996 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" podStartSLOduration=2.959040195 podStartE2EDuration="13.222958315s" podCreationTimestamp="2026-01-28 15:31:46 +0000 UTC" firstStartedPulling="2026-01-28 15:31:47.647385749 +0000 UTC m=+798.155556563" lastFinishedPulling="2026-01-28 15:31:57.911303879 +0000 UTC m=+808.419474683" observedRunningTime="2026-01-28 15:31:59.222324427 +0000 UTC m=+809.730495291" 
watchObservedRunningTime="2026-01-28 15:31:59.222958315 +0000 UTC m=+809.731129159" Jan 28 15:32:02 crc kubenswrapper[4656]: I0128 15:32:02.644262 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:32:07 crc kubenswrapper[4656]: I0128 15:32:07.874067 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:32:11 crc kubenswrapper[4656]: I0128 15:32:11.263754 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:32:11 crc kubenswrapper[4656]: I0128 15:32:11.264071 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.904577 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d"] Jan 28 15:32:20 crc kubenswrapper[4656]: E0128 15:32:20.905408 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="extract-content" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.905428 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="extract-content" Jan 28 15:32:20 crc kubenswrapper[4656]: E0128 15:32:20.905452 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" 
containerName="extract-utilities" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.905460 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="extract-utilities" Jan 28 15:32:20 crc kubenswrapper[4656]: E0128 15:32:20.905478 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="registry-server" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.905486 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="registry-server" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.905657 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="registry-server" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.906898 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.911426 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.922715 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d"] Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.068851 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.069238 4656 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28z9v\" (UniqueName: \"kubernetes.io/projected/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-kube-api-access-28z9v\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.069398 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.170297 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28z9v\" (UniqueName: \"kubernetes.io/projected/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-kube-api-access-28z9v\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.170351 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.170431 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"util\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.170975 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.171087 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.190523 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28z9v\" (UniqueName: \"kubernetes.io/projected/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-kube-api-access-28z9v\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.223864 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.814201 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d"] Jan 28 15:32:22 crc kubenswrapper[4656]: I0128 15:32:22.348528 4656 generic.go:334] "Generic (PLEG): container finished" podID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerID="8fc9f26efd0e5690cf8d68d433b9bd3eed36dbfd3955ebb0b4c98ddbda8996bb" exitCode=0 Jan 28 15:32:22 crc kubenswrapper[4656]: I0128 15:32:22.348681 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" event={"ID":"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f","Type":"ContainerDied","Data":"8fc9f26efd0e5690cf8d68d433b9bd3eed36dbfd3955ebb0b4c98ddbda8996bb"} Jan 28 15:32:22 crc kubenswrapper[4656]: I0128 15:32:22.348867 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" event={"ID":"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f","Type":"ContainerStarted","Data":"04a156875da8dc6d4d7efec6722544b9e8f5bd11a721782c1b47bdebab3a0813"} Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.339410 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-jrkdc" podUID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" containerName="console" containerID="cri-o://6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8" gracePeriod=15 Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.701506 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jrkdc_acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f/console/0.log" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.701858 4656 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.904490 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-serving-cert\") pod \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.904600 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-service-ca\") pod \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.904639 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-config\") pod \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.904683 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrjsb\" (UniqueName: \"kubernetes.io/projected/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-kube-api-access-jrjsb\") pod \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.904723 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-oauth-serving-cert\") pod \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.904750 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-oauth-config\") pod \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.904806 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-trusted-ca-bundle\") pod \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.905763 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" (UID: "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.905777 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" (UID: "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.906034 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-service-ca" (OuterVolumeSpecName: "service-ca") pod "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" (UID: "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.906064 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-config" (OuterVolumeSpecName: "console-config") pod "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" (UID: "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.911278 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" (UID: "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.911385 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-kube-api-access-jrjsb" (OuterVolumeSpecName: "kube-api-access-jrjsb") pod "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" (UID: "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f"). InnerVolumeSpecName "kube-api-access-jrjsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.916254 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" (UID: "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.006273 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.006329 4656 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.006404 4656 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.006419 4656 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.006430 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrjsb\" (UniqueName: \"kubernetes.io/projected/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-kube-api-access-jrjsb\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.006445 4656 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.006455 4656 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:24 crc 
kubenswrapper[4656]: I0128 15:32:24.364948 4656 generic.go:334] "Generic (PLEG): container finished" podID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerID="48f4a658635ea51f802fd36a517cb4763602c51a7d8cf54a8212b9ed252b1e88" exitCode=0 Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.365026 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" event={"ID":"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f","Type":"ContainerDied","Data":"48f4a658635ea51f802fd36a517cb4763602c51a7d8cf54a8212b9ed252b1e88"} Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.370040 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jrkdc_acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f/console/0.log" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.370293 4656 generic.go:334] "Generic (PLEG): container finished" podID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" containerID="6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8" exitCode=2 Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.370484 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jrkdc" event={"ID":"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f","Type":"ContainerDied","Data":"6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8"} Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.370908 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.372110 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jrkdc" event={"ID":"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f","Type":"ContainerDied","Data":"c41207c51ae6b24c926215e0f0025b63660d715fc02590a179969ca36e581017"} Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.372197 4656 scope.go:117] "RemoveContainer" containerID="6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.411361 4656 scope.go:117] "RemoveContainer" containerID="6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8" Jan 28 15:32:24 crc kubenswrapper[4656]: E0128 15:32:24.413201 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8\": container with ID starting with 6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8 not found: ID does not exist" containerID="6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.413372 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8"} err="failed to get container status \"6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8\": rpc error: code = NotFound desc = could not find container \"6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8\": container with ID starting with 6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8 not found: ID does not exist" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.446615 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jrkdc"] Jan 28 15:32:24 crc 
kubenswrapper[4656]: I0128 15:32:24.454231 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-jrkdc"] Jan 28 15:32:25 crc kubenswrapper[4656]: I0128 15:32:25.178398 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" path="/var/lib/kubelet/pods/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f/volumes" Jan 28 15:32:25 crc kubenswrapper[4656]: I0128 15:32:25.378878 4656 generic.go:334] "Generic (PLEG): container finished" podID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerID="ea7efe11676daa7d911cdd649195885da2ae75e7f58d2e0c4b126f7cf35c0839" exitCode=0 Jan 28 15:32:25 crc kubenswrapper[4656]: I0128 15:32:25.378936 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" event={"ID":"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f","Type":"ContainerDied","Data":"ea7efe11676daa7d911cdd649195885da2ae75e7f58d2e0c4b126f7cf35c0839"} Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.572460 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.740484 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-util\") pod \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.740612 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28z9v\" (UniqueName: \"kubernetes.io/projected/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-kube-api-access-28z9v\") pod \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.740663 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-bundle\") pod \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.741989 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-bundle" (OuterVolumeSpecName: "bundle") pod "f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" (UID: "f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.747232 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-kube-api-access-28z9v" (OuterVolumeSpecName: "kube-api-access-28z9v") pod "f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" (UID: "f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f"). InnerVolumeSpecName "kube-api-access-28z9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.756128 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-util" (OuterVolumeSpecName: "util") pod "f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" (UID: "f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.842698 4656 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.842757 4656 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.842770 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28z9v\" (UniqueName: \"kubernetes.io/projected/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-kube-api-access-28z9v\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:27 crc kubenswrapper[4656]: I0128 15:32:27.393638 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" event={"ID":"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f","Type":"ContainerDied","Data":"04a156875da8dc6d4d7efec6722544b9e8f5bd11a721782c1b47bdebab3a0813"} Jan 28 15:32:27 crc kubenswrapper[4656]: I0128 15:32:27.393690 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:27 crc kubenswrapper[4656]: I0128 15:32:27.393695 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04a156875da8dc6d4d7efec6722544b9e8f5bd11a721782c1b47bdebab3a0813" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.871036 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk"] Jan 28 15:32:35 crc kubenswrapper[4656]: E0128 15:32:35.871976 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" containerName="console" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.871998 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" containerName="console" Jan 28 15:32:35 crc kubenswrapper[4656]: E0128 15:32:35.872028 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerName="pull" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.872036 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerName="pull" Jan 28 15:32:35 crc kubenswrapper[4656]: E0128 15:32:35.872054 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerName="extract" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.872063 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerName="extract" Jan 28 15:32:35 crc kubenswrapper[4656]: E0128 15:32:35.872074 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerName="util" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.872088 4656 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerName="util" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.872261 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerName="extract" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.872295 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" containerName="console" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.872864 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.877561 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93bc850d-d691-43b6-8668-79f21bd350a7-apiservice-cert\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.877632 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9pmr\" (UniqueName: \"kubernetes.io/projected/93bc850d-d691-43b6-8668-79f21bd350a7-kube-api-access-n9pmr\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.877614 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-sjnks" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.877734 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/93bc850d-d691-43b6-8668-79f21bd350a7-webhook-cert\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.877955 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.878333 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.878600 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.883021 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.906387 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk"] Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.978834 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9pmr\" (UniqueName: \"kubernetes.io/projected/93bc850d-d691-43b6-8668-79f21bd350a7-kube-api-access-n9pmr\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.979145 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93bc850d-d691-43b6-8668-79f21bd350a7-apiservice-cert\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") 
" pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.979196 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93bc850d-d691-43b6-8668-79f21bd350a7-webhook-cert\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.991148 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93bc850d-d691-43b6-8668-79f21bd350a7-webhook-cert\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.992858 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93bc850d-d691-43b6-8668-79f21bd350a7-apiservice-cert\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.023689 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9pmr\" (UniqueName: \"kubernetes.io/projected/93bc850d-d691-43b6-8668-79f21bd350a7-kube-api-access-n9pmr\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.219502 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf"] Jan 
28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.220303 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.222497 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.222694 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.222929 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-wjfks" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.238456 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf"] Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.264714 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.289188 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-webhook-cert\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.289709 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-apiservice-cert\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.289735 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tz5\" (UniqueName: \"kubernetes.io/projected/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-kube-api-access-66tz5\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.390537 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-webhook-cert\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.390634 4656 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-66tz5\" (UniqueName: \"kubernetes.io/projected/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-kube-api-access-66tz5\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.390659 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-apiservice-cert\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.396327 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-webhook-cert\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.396557 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-apiservice-cert\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.417099 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66tz5\" (UniqueName: \"kubernetes.io/projected/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-kube-api-access-66tz5\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") 
" pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.534872 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.959582 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk"] Jan 28 15:32:37 crc kubenswrapper[4656]: I0128 15:32:37.022965 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf"] Jan 28 15:32:37 crc kubenswrapper[4656]: I0128 15:32:37.464408 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" event={"ID":"93bc850d-d691-43b6-8668-79f21bd350a7","Type":"ContainerStarted","Data":"f2cb169b066cbe2b299e58230534327fe8069c2932807d646939a80cdbd29711"} Jan 28 15:32:37 crc kubenswrapper[4656]: I0128 15:32:37.465599 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" event={"ID":"585f1e9a-4070-4b23-bbab-f29ae7e95cf0","Type":"ContainerStarted","Data":"cbdc95b54efd7551c4b7ae1958c6954302acf696c7334e5d96b0d2dd380bce4b"} Jan 28 15:32:41 crc kubenswrapper[4656]: I0128 15:32:41.266421 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:32:41 crc kubenswrapper[4656]: I0128 15:32:41.266897 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:32:43 crc kubenswrapper[4656]: I0128 15:32:43.512093 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" event={"ID":"585f1e9a-4070-4b23-bbab-f29ae7e95cf0","Type":"ContainerStarted","Data":"4e788ec25cbde59f95b43247b7e7916e313a289e710f56b496571ba1e9e211a3"} Jan 28 15:32:43 crc kubenswrapper[4656]: I0128 15:32:43.512487 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:43 crc kubenswrapper[4656]: I0128 15:32:43.514394 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" event={"ID":"93bc850d-d691-43b6-8668-79f21bd350a7","Type":"ContainerStarted","Data":"af4b6cb74f09e3294b44bc71e97e4cc4b094427f814f54529716e6a5730df9c7"} Jan 28 15:32:43 crc kubenswrapper[4656]: I0128 15:32:43.514696 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:43 crc kubenswrapper[4656]: I0128 15:32:43.538137 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" podStartSLOduration=1.35362147 podStartE2EDuration="7.538115067s" podCreationTimestamp="2026-01-28 15:32:36 +0000 UTC" firstStartedPulling="2026-01-28 15:32:37.035463274 +0000 UTC m=+847.543634088" lastFinishedPulling="2026-01-28 15:32:43.219956871 +0000 UTC m=+853.728127685" observedRunningTime="2026-01-28 15:32:43.532911468 +0000 UTC m=+854.041082292" watchObservedRunningTime="2026-01-28 15:32:43.538115067 +0000 UTC m=+854.046285891" Jan 28 15:32:43 crc kubenswrapper[4656]: I0128 15:32:43.554139 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" podStartSLOduration=2.350229032 podStartE2EDuration="8.554115737s" podCreationTimestamp="2026-01-28 15:32:35 +0000 UTC" firstStartedPulling="2026-01-28 15:32:36.999095668 +0000 UTC m=+847.507266472" lastFinishedPulling="2026-01-28 15:32:43.202982373 +0000 UTC m=+853.711153177" observedRunningTime="2026-01-28 15:32:43.551864263 +0000 UTC m=+854.060035087" watchObservedRunningTime="2026-01-28 15:32:43.554115737 +0000 UTC m=+854.062286541" Jan 28 15:32:56 crc kubenswrapper[4656]: I0128 15:32:56.539972 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.265599 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.266323 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.266416 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.267145 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87c17d0db94ead712d442056e9a18e38055b40f27c59008c11f1ea77ac6037d0"} 
pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.267243 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://87c17d0db94ead712d442056e9a18e38055b40f27c59008c11f1ea77ac6037d0" gracePeriod=600 Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.683029 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="87c17d0db94ead712d442056e9a18e38055b40f27c59008c11f1ea77ac6037d0" exitCode=0 Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.683088 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"87c17d0db94ead712d442056e9a18e38055b40f27c59008c11f1ea77ac6037d0"} Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.683431 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"45af716abfac826ba3a4dfbcd1d22436c5270721d55f11ffa5d85cae3cd0840f"} Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.683494 4656 scope.go:117] "RemoveContainer" containerID="c69beb7ab8edbd918c480179277219ae11258f52e8862dd697c2421ee64e9af1" Jan 28 15:33:16 crc kubenswrapper[4656]: I0128 15:33:16.267566 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.004563 4656 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/frr-k8s-z94g8"] Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.007525 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.016193 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-t7bxm" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.017010 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.020910 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.024255 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq"] Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.025004 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.026918 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.058906 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq"] Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.113078 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzkdx\" (UniqueName: \"kubernetes.io/projected/e31100cc-2c8a-4682-b6fb-acc4157f7d43-kube-api-access-mzkdx\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.113136 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-metrics\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.113686 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-conf\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.113748 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-reloader\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 
15:33:17.113786 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e31100cc-2c8a-4682-b6fb-acc4157f7d43-metrics-certs\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.114024 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-startup\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.114228 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-sockets\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.142528 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-k4qr2"] Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.143479 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.145182 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.146484 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.151248 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.153059 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-dm6w7" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.164958 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-xgjbl"] Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.165908 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.171133 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.190029 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-xgjbl"] Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215537 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzkdx\" (UniqueName: \"kubernetes.io/projected/e31100cc-2c8a-4682-b6fb-acc4157f7d43-kube-api-access-mzkdx\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215583 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-metrics\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215612 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-cert\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215639 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/555938b5-7504-41e4-9331-7be899491299-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-nzvcq\" (UID: \"555938b5-7504-41e4-9331-7be899491299\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 
15:33:17.215663 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metallb-excludel2\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215690 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metrics-certs\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215709 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-conf\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215729 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-reloader\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215752 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215788 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/e31100cc-2c8a-4682-b6fb-acc4157f7d43-metrics-certs\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215819 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw2c7\" (UniqueName: \"kubernetes.io/projected/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-kube-api-access-mw2c7\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215854 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jpd2\" (UniqueName: \"kubernetes.io/projected/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-kube-api-access-8jpd2\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215876 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-startup\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215906 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q72k\" (UniqueName: \"kubernetes.io/projected/555938b5-7504-41e4-9331-7be899491299-kube-api-access-9q72k\") pod \"frr-k8s-webhook-server-7df86c4f6c-nzvcq\" (UID: \"555938b5-7504-41e4-9331-7be899491299\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215964 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-sockets\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215987 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-metrics-certs\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.216776 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-metrics\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.217040 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-conf\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.221468 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-reloader\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.222863 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-sockets\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc 
kubenswrapper[4656]: I0128 15:33:17.223061 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-startup\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.240856 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e31100cc-2c8a-4682-b6fb-acc4157f7d43-metrics-certs\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.248810 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzkdx\" (UniqueName: \"kubernetes.io/projected/e31100cc-2c8a-4682-b6fb-acc4157f7d43-kube-api-access-mzkdx\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316316 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q72k\" (UniqueName: \"kubernetes.io/projected/555938b5-7504-41e4-9331-7be899491299-kube-api-access-9q72k\") pod \"frr-k8s-webhook-server-7df86c4f6c-nzvcq\" (UID: \"555938b5-7504-41e4-9331-7be899491299\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316370 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-metrics-certs\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316411 4656 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-cert\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316428 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/555938b5-7504-41e4-9331-7be899491299-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-nzvcq\" (UID: \"555938b5-7504-41e4-9331-7be899491299\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316454 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metallb-excludel2\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316474 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metrics-certs\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316503 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316526 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw2c7\" (UniqueName: \"kubernetes.io/projected/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-kube-api-access-mw2c7\") pod 
\"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316549 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jpd2\" (UniqueName: \"kubernetes.io/projected/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-kube-api-access-8jpd2\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.316663 4656 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.316777 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist podName:8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3 nodeName:}" failed. No retries permitted until 2026-01-28 15:33:17.816734851 +0000 UTC m=+888.324905655 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist") pod "speaker-k4qr2" (UID: "8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3") : secret "metallb-memberlist" not found Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.316949 4656 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.317008 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metrics-certs podName:8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3 nodeName:}" failed. No retries permitted until 2026-01-28 15:33:17.816985208 +0000 UTC m=+888.325156022 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metrics-certs") pod "speaker-k4qr2" (UID: "8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3") : secret "speaker-certs-secret" not found Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.317254 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metallb-excludel2\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.317449 4656 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.317510 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-metrics-certs podName:a0c56151-b07f-4c02-9b4c-0b48c4dd8a03 nodeName:}" failed. No retries permitted until 2026-01-28 15:33:17.817489673 +0000 UTC m=+888.325660567 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-metrics-certs") pod "controller-6968d8fdc4-xgjbl" (UID: "a0c56151-b07f-4c02-9b4c-0b48c4dd8a03") : secret "controller-certs-secret" not found Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.319824 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-cert\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.324189 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.329722 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/555938b5-7504-41e4-9331-7be899491299-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-nzvcq\" (UID: \"555938b5-7504-41e4-9331-7be899491299\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.337647 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw2c7\" (UniqueName: \"kubernetes.io/projected/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-kube-api-access-mw2c7\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.338137 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q72k\" (UniqueName: \"kubernetes.io/projected/555938b5-7504-41e4-9331-7be899491299-kube-api-access-9q72k\") pod \"frr-k8s-webhook-server-7df86c4f6c-nzvcq\" (UID: \"555938b5-7504-41e4-9331-7be899491299\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.338803 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.349793 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jpd2\" (UniqueName: \"kubernetes.io/projected/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-kube-api-access-8jpd2\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.721851 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerStarted","Data":"6f68746b3d9f4d682982276809a2c870c5d38fbb0d71ec0397a4adbbc8a1f3a9"} Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.821966 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metrics-certs\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.822016 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.822062 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-metrics-certs\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.822335 4656 secret.go:188] Couldn't get secret 
metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.822435 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist podName:8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3 nodeName:}" failed. No retries permitted until 2026-01-28 15:33:18.822411067 +0000 UTC m=+889.330581871 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist") pod "speaker-k4qr2" (UID: "8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3") : secret "metallb-memberlist" not found Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.823454 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq"] Jan 28 15:33:17 crc kubenswrapper[4656]: W0128 15:33:17.826538 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod555938b5_7504_41e4_9331_7be899491299.slice/crio-4a68f55507fac9296298d244fe285dcf21f01581d58854197c428e602c4159a4 WatchSource:0}: Error finding container 4a68f55507fac9296298d244fe285dcf21f01581d58854197c428e602c4159a4: Status 404 returned error can't find the container with id 4a68f55507fac9296298d244fe285dcf21f01581d58854197c428e602c4159a4 Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.827496 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-metrics-certs\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.829622 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metrics-certs\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.083225 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.399667 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-xgjbl"] Jan 28 15:33:18 crc kubenswrapper[4656]: W0128 15:33:18.421565 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0c56151_b07f_4c02_9b4c_0b48c4dd8a03.slice/crio-80946ce0751326de500df6bead81877dda7103177517d0515992ccc75d986a34 WatchSource:0}: Error finding container 80946ce0751326de500df6bead81877dda7103177517d0515992ccc75d986a34: Status 404 returned error can't find the container with id 80946ce0751326de500df6bead81877dda7103177517d0515992ccc75d986a34 Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.730025 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-xgjbl" event={"ID":"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03","Type":"ContainerStarted","Data":"340f8d38554e114e7a7a00e32c5fe17afb2c5c48a7b9d6800224e0c2c45659b2"} Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.730082 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-xgjbl" event={"ID":"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03","Type":"ContainerStarted","Data":"80946ce0751326de500df6bead81877dda7103177517d0515992ccc75d986a34"} Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.732014 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" 
event={"ID":"555938b5-7504-41e4-9331-7be899491299","Type":"ContainerStarted","Data":"4a68f55507fac9296298d244fe285dcf21f01581d58854197c428e602c4159a4"} Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.833121 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.840878 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.957644 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-k4qr2" Jan 28 15:33:18 crc kubenswrapper[4656]: W0128 15:33:18.998443 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ffb0cda_bbe7_41e1_acdb_fd11fd2e33a3.slice/crio-11ec395aaa4435a9365d4a82acf6e69d481f9f73f7abc6b8cb98a17d61b467e8 WatchSource:0}: Error finding container 11ec395aaa4435a9365d4a82acf6e69d481f9f73f7abc6b8cb98a17d61b467e8: Status 404 returned error can't find the container with id 11ec395aaa4435a9365d4a82acf6e69d481f9f73f7abc6b8cb98a17d61b467e8 Jan 28 15:33:19 crc kubenswrapper[4656]: I0128 15:33:19.746853 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-xgjbl" event={"ID":"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03","Type":"ContainerStarted","Data":"39153442b29f1725a200a9d891027834262f56487c54efcb36c224a6f09c2d89"} Jan 28 15:33:19 crc kubenswrapper[4656]: I0128 15:33:19.747422 4656 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:19 crc kubenswrapper[4656]: I0128 15:33:19.755433 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k4qr2" event={"ID":"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3","Type":"ContainerStarted","Data":"4b62abfac82e167105d32e08a464d055b30fd488f25c8892e522441ace3728f0"} Jan 28 15:33:19 crc kubenswrapper[4656]: I0128 15:33:19.755473 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k4qr2" event={"ID":"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3","Type":"ContainerStarted","Data":"2f3b90df056f8691c3a39b27f294c2fab0f582186cc819bda7de16d24bc0d72f"} Jan 28 15:33:19 crc kubenswrapper[4656]: I0128 15:33:19.755483 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k4qr2" event={"ID":"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3","Type":"ContainerStarted","Data":"11ec395aaa4435a9365d4a82acf6e69d481f9f73f7abc6b8cb98a17d61b467e8"} Jan 28 15:33:19 crc kubenswrapper[4656]: I0128 15:33:19.755912 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-k4qr2" Jan 28 15:33:19 crc kubenswrapper[4656]: I0128 15:33:19.826639 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-xgjbl" podStartSLOduration=2.826616583 podStartE2EDuration="2.826616583s" podCreationTimestamp="2026-01-28 15:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:33:19.793647316 +0000 UTC m=+890.301818120" watchObservedRunningTime="2026-01-28 15:33:19.826616583 +0000 UTC m=+890.334787387" Jan 28 15:33:21 crc kubenswrapper[4656]: I0128 15:33:21.195325 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-k4qr2" podStartSLOduration=4.195303349 podStartE2EDuration="4.195303349s" 
podCreationTimestamp="2026-01-28 15:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:33:19.825004747 +0000 UTC m=+890.333175571" watchObservedRunningTime="2026-01-28 15:33:21.195303349 +0000 UTC m=+891.703474153" Jan 28 15:33:28 crc kubenswrapper[4656]: I0128 15:33:28.088908 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.766964 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d6s8p"] Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.769536 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.792243 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d6s8p"] Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.840888 4656 generic.go:334] "Generic (PLEG): container finished" podID="e31100cc-2c8a-4682-b6fb-acc4157f7d43" containerID="8d01c29fd96b6a245f92a36e09da008a30e90cba3803ead32b6a1c660cf0ea11" exitCode=0 Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.840991 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerDied","Data":"8d01c29fd96b6a245f92a36e09da008a30e90cba3803ead32b6a1c660cf0ea11"} Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.843208 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" event={"ID":"555938b5-7504-41e4-9331-7be899491299","Type":"ContainerStarted","Data":"488c5139fcb911039a92150deb064cf9e317ad744e75620626c206f313505bfc"} Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.843355 4656 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.896393 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-utilities\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.896503 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-catalog-content\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.896584 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6tz7\" (UniqueName: \"kubernetes.io/projected/aea8db27-38e0-4ac8-8835-b9d701c8f230-kube-api-access-b6tz7\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.946655 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" podStartSLOduration=2.9299799479999997 podStartE2EDuration="14.946623914s" podCreationTimestamp="2026-01-28 15:33:16 +0000 UTC" firstStartedPulling="2026-01-28 15:33:17.828963286 +0000 UTC m=+888.337134090" lastFinishedPulling="2026-01-28 15:33:29.845607252 +0000 UTC m=+900.353778056" observedRunningTime="2026-01-28 15:33:30.943965887 +0000 UTC m=+901.452136691" watchObservedRunningTime="2026-01-28 15:33:30.946623914 +0000 
UTC m=+901.454794718" Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.998138 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-catalog-content\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.998269 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6tz7\" (UniqueName: \"kubernetes.io/projected/aea8db27-38e0-4ac8-8835-b9d701c8f230-kube-api-access-b6tz7\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.998327 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-utilities\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.999381 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-catalog-content\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:31 crc kubenswrapper[4656]: I0128 15:33:31.000363 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-utilities\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:31 crc 
kubenswrapper[4656]: I0128 15:33:31.022311 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6tz7\" (UniqueName: \"kubernetes.io/projected/aea8db27-38e0-4ac8-8835-b9d701c8f230-kube-api-access-b6tz7\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:31 crc kubenswrapper[4656]: I0128 15:33:31.083965 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:31 crc kubenswrapper[4656]: I0128 15:33:31.666704 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d6s8p"] Jan 28 15:33:31 crc kubenswrapper[4656]: W0128 15:33:31.671859 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaea8db27_38e0_4ac8_8835_b9d701c8f230.slice/crio-c85a20e117717387a489dc5dd61783b4e5fcd7cda3111851f391796dcda146f0 WatchSource:0}: Error finding container c85a20e117717387a489dc5dd61783b4e5fcd7cda3111851f391796dcda146f0: Status 404 returned error can't find the container with id c85a20e117717387a489dc5dd61783b4e5fcd7cda3111851f391796dcda146f0 Jan 28 15:33:31 crc kubenswrapper[4656]: I0128 15:33:31.852466 4656 generic.go:334] "Generic (PLEG): container finished" podID="e31100cc-2c8a-4682-b6fb-acc4157f7d43" containerID="bec3cc61215aa6c2b131a7f50243ebbd5b165c44cf464a132df799820604071a" exitCode=0 Jan 28 15:33:31 crc kubenswrapper[4656]: I0128 15:33:31.854110 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerDied","Data":"bec3cc61215aa6c2b131a7f50243ebbd5b165c44cf464a132df799820604071a"} Jan 28 15:33:31 crc kubenswrapper[4656]: I0128 15:33:31.856055 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-d6s8p" event={"ID":"aea8db27-38e0-4ac8-8835-b9d701c8f230","Type":"ContainerStarted","Data":"c85a20e117717387a489dc5dd61783b4e5fcd7cda3111851f391796dcda146f0"} Jan 28 15:33:32 crc kubenswrapper[4656]: I0128 15:33:32.862683 4656 generic.go:334] "Generic (PLEG): container finished" podID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerID="fe8b322c739039fd04ab424c28e05ffbfdd4af0e21a26d5ce1fb07d9670baa75" exitCode=0 Jan 28 15:33:32 crc kubenswrapper[4656]: I0128 15:33:32.862772 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6s8p" event={"ID":"aea8db27-38e0-4ac8-8835-b9d701c8f230","Type":"ContainerDied","Data":"fe8b322c739039fd04ab424c28e05ffbfdd4af0e21a26d5ce1fb07d9670baa75"} Jan 28 15:33:32 crc kubenswrapper[4656]: I0128 15:33:32.866562 4656 generic.go:334] "Generic (PLEG): container finished" podID="e31100cc-2c8a-4682-b6fb-acc4157f7d43" containerID="705580916d7d736910f35c909b64a4ea500c151dcc81e7af94f7dc6ebb96efa7" exitCode=0 Jan 28 15:33:32 crc kubenswrapper[4656]: I0128 15:33:32.866598 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerDied","Data":"705580916d7d736910f35c909b64a4ea500c151dcc81e7af94f7dc6ebb96efa7"} Jan 28 15:33:36 crc kubenswrapper[4656]: I0128 15:33:36.894914 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerStarted","Data":"988a794031b8f6e722d0f71bc04ac75b3bb4a225cca7ba1c33dd15fb9f327082"} Jan 28 15:33:36 crc kubenswrapper[4656]: I0128 15:33:36.895570 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerStarted","Data":"2f63e67d64f51292396382dbd358736caf1f8d965cb05da30c4b6427a143c269"} Jan 28 15:33:37 crc kubenswrapper[4656]: I0128 
15:33:37.917960 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerStarted","Data":"2301a34435ae57f2c956f3a22e00ab2f916f51af1fae0ad0025e198d9bab9ce1"} Jan 28 15:33:37 crc kubenswrapper[4656]: I0128 15:33:37.919256 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerStarted","Data":"c91c43961ecf120b0ae37bdf11bf68ae5d224cdda8ace05a93bf5b1048100296"} Jan 28 15:33:37 crc kubenswrapper[4656]: I0128 15:33:37.921282 4656 generic.go:334] "Generic (PLEG): container finished" podID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerID="9c21dc5b63458bfdf67af2165e2644c35c783d3c4eb17c296ab48b7688983ee9" exitCode=0 Jan 28 15:33:37 crc kubenswrapper[4656]: I0128 15:33:37.921309 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6s8p" event={"ID":"aea8db27-38e0-4ac8-8835-b9d701c8f230","Type":"ContainerDied","Data":"9c21dc5b63458bfdf67af2165e2644c35c783d3c4eb17c296ab48b7688983ee9"} Jan 28 15:33:38 crc kubenswrapper[4656]: I0128 15:33:38.930844 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerStarted","Data":"672785aaeb3dcfc1a7044d7ae574ee59501905f8f0ccf92b43011e15c73fb6f5"} Jan 28 15:33:38 crc kubenswrapper[4656]: I0128 15:33:38.931101 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerStarted","Data":"e82ca68b3f1f13b5f5d16e11b29f1cd3f813aa8af6403ffcc52565d15adbcbdf"} Jan 28 15:33:38 crc kubenswrapper[4656]: I0128 15:33:38.931705 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:38 crc kubenswrapper[4656]: I0128 15:33:38.961549 4656 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-k4qr2" Jan 28 15:33:38 crc kubenswrapper[4656]: I0128 15:33:38.969989 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-z94g8" podStartSLOduration=10.586425115 podStartE2EDuration="22.969964108s" podCreationTimestamp="2026-01-28 15:33:16 +0000 UTC" firstStartedPulling="2026-01-28 15:33:17.496490909 +0000 UTC m=+888.004661723" lastFinishedPulling="2026-01-28 15:33:29.880029912 +0000 UTC m=+900.388200716" observedRunningTime="2026-01-28 15:33:38.964801619 +0000 UTC m=+909.472972433" watchObservedRunningTime="2026-01-28 15:33:38.969964108 +0000 UTC m=+909.478134912" Jan 28 15:33:39 crc kubenswrapper[4656]: I0128 15:33:39.941700 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6s8p" event={"ID":"aea8db27-38e0-4ac8-8835-b9d701c8f230","Type":"ContainerStarted","Data":"7dbc24f34e40013f613a8a2c137595d28f480ccce0e5b63798e70e4ef84cc0e5"} Jan 28 15:33:39 crc kubenswrapper[4656]: I0128 15:33:39.970041 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d6s8p" podStartSLOduration=3.930896722 podStartE2EDuration="9.970015561s" podCreationTimestamp="2026-01-28 15:33:30 +0000 UTC" firstStartedPulling="2026-01-28 15:33:32.864660051 +0000 UTC m=+903.372830855" lastFinishedPulling="2026-01-28 15:33:38.90377889 +0000 UTC m=+909.411949694" observedRunningTime="2026-01-28 15:33:39.963704959 +0000 UTC m=+910.471875763" watchObservedRunningTime="2026-01-28 15:33:39.970015561 +0000 UTC m=+910.478186365" Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.084844 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.085376 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.879724 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hc9z5"] Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.881011 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hc9z5" Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.884371 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.884546 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-9nnjn" Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.897644 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.914336 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hc9z5"] Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.079980 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg7wc\" (UniqueName: \"kubernetes.io/projected/586a9839-5486-44ed-bccd-7219927f1582-kube-api-access-mg7wc\") pod \"openstack-operator-index-hc9z5\" (UID: \"586a9839-5486-44ed-bccd-7219927f1582\") " pod="openstack-operators/openstack-operator-index-hc9z5" Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.181414 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg7wc\" (UniqueName: \"kubernetes.io/projected/586a9839-5486-44ed-bccd-7219927f1582-kube-api-access-mg7wc\") pod \"openstack-operator-index-hc9z5\" (UID: \"586a9839-5486-44ed-bccd-7219927f1582\") " 
pod="openstack-operators/openstack-operator-index-hc9z5" Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.196800 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-d6s8p" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="registry-server" probeResult="failure" output=< Jan 28 15:33:42 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 15:33:42 crc kubenswrapper[4656]: > Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.212930 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg7wc\" (UniqueName: \"kubernetes.io/projected/586a9839-5486-44ed-bccd-7219927f1582-kube-api-access-mg7wc\") pod \"openstack-operator-index-hc9z5\" (UID: \"586a9839-5486-44ed-bccd-7219927f1582\") " pod="openstack-operators/openstack-operator-index-hc9z5" Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.324971 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.378929 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.501006 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hc9z5" Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.802209 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hc9z5"] Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.979896 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hc9z5" event={"ID":"586a9839-5486-44ed-bccd-7219927f1582","Type":"ContainerStarted","Data":"1d55a2c91351df6542144ca7b0d478b0ef9b7d981b9e83ff9b7d814420f07605"} Jan 28 15:33:45 crc kubenswrapper[4656]: I0128 15:33:45.243340 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hc9z5"] Jan 28 15:33:45 crc kubenswrapper[4656]: I0128 15:33:45.859344 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tknhq"] Jan 28 15:33:45 crc kubenswrapper[4656]: I0128 15:33:45.863046 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tknhq" Jan 28 15:33:45 crc kubenswrapper[4656]: I0128 15:33:45.871851 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tknhq"] Jan 28 15:33:45 crc kubenswrapper[4656]: I0128 15:33:45.994027 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfs75\" (UniqueName: \"kubernetes.io/projected/c65965be-4267-4c92-a9b1-046d85299b2c-kube-api-access-dfs75\") pod \"openstack-operator-index-tknhq\" (UID: \"c65965be-4267-4c92-a9b1-046d85299b2c\") " pod="openstack-operators/openstack-operator-index-tknhq" Jan 28 15:33:46 crc kubenswrapper[4656]: I0128 15:33:46.095272 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfs75\" (UniqueName: \"kubernetes.io/projected/c65965be-4267-4c92-a9b1-046d85299b2c-kube-api-access-dfs75\") pod \"openstack-operator-index-tknhq\" (UID: \"c65965be-4267-4c92-a9b1-046d85299b2c\") " pod="openstack-operators/openstack-operator-index-tknhq" Jan 28 15:33:46 crc kubenswrapper[4656]: I0128 15:33:46.120758 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfs75\" (UniqueName: \"kubernetes.io/projected/c65965be-4267-4c92-a9b1-046d85299b2c-kube-api-access-dfs75\") pod \"openstack-operator-index-tknhq\" (UID: \"c65965be-4267-4c92-a9b1-046d85299b2c\") " pod="openstack-operators/openstack-operator-index-tknhq" Jan 28 15:33:46 crc kubenswrapper[4656]: I0128 15:33:46.202654 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tknhq" Jan 28 15:33:47 crc kubenswrapper[4656]: I0128 15:33:47.234746 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tknhq"] Jan 28 15:33:47 crc kubenswrapper[4656]: I0128 15:33:47.330384 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:47 crc kubenswrapper[4656]: I0128 15:33:47.369580 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:48 crc kubenswrapper[4656]: W0128 15:33:48.107419 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc65965be_4267_4c92_a9b1_046d85299b2c.slice/crio-9257c587405d96ebfdd173d424e3f50644adf64d1899d83653b4f2bc04bd0c30 WatchSource:0}: Error finding container 9257c587405d96ebfdd173d424e3f50644adf64d1899d83653b4f2bc04bd0c30: Status 404 returned error can't find the container with id 9257c587405d96ebfdd173d424e3f50644adf64d1899d83653b4f2bc04bd0c30 Jan 28 15:33:49 crc kubenswrapper[4656]: I0128 15:33:49.022146 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tknhq" event={"ID":"c65965be-4267-4c92-a9b1-046d85299b2c","Type":"ContainerStarted","Data":"9257c587405d96ebfdd173d424e3f50644adf64d1899d83653b4f2bc04bd0c30"} Jan 28 15:33:50 crc kubenswrapper[4656]: I0128 15:33:50.030078 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tknhq" event={"ID":"c65965be-4267-4c92-a9b1-046d85299b2c","Type":"ContainerStarted","Data":"c4f5bb1cd01c83029fc1f496c11478e78ba6b1ba94a3b5cb9790a1faee4d6ddc"} Jan 28 15:33:50 crc kubenswrapper[4656]: I0128 15:33:50.031533 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hc9z5" 
event={"ID":"586a9839-5486-44ed-bccd-7219927f1582","Type":"ContainerStarted","Data":"32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09"} Jan 28 15:33:50 crc kubenswrapper[4656]: I0128 15:33:50.031679 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-hc9z5" podUID="586a9839-5486-44ed-bccd-7219927f1582" containerName="registry-server" containerID="cri-o://32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09" gracePeriod=2 Jan 28 15:33:50 crc kubenswrapper[4656]: I0128 15:33:50.074359 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tknhq" podStartSLOduration=4.27603481 podStartE2EDuration="5.074332009s" podCreationTimestamp="2026-01-28 15:33:45 +0000 UTC" firstStartedPulling="2026-01-28 15:33:48.109317893 +0000 UTC m=+918.617488707" lastFinishedPulling="2026-01-28 15:33:48.907615112 +0000 UTC m=+919.415785906" observedRunningTime="2026-01-28 15:33:50.04766199 +0000 UTC m=+920.555832794" watchObservedRunningTime="2026-01-28 15:33:50.074332009 +0000 UTC m=+920.582502813" Jan 28 15:33:50 crc kubenswrapper[4656]: I0128 15:33:50.415770 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hc9z5" Jan 28 15:33:50 crc kubenswrapper[4656]: I0128 15:33:50.557821 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg7wc\" (UniqueName: \"kubernetes.io/projected/586a9839-5486-44ed-bccd-7219927f1582-kube-api-access-mg7wc\") pod \"586a9839-5486-44ed-bccd-7219927f1582\" (UID: \"586a9839-5486-44ed-bccd-7219927f1582\") " Jan 28 15:33:50 crc kubenswrapper[4656]: I0128 15:33:50.563930 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586a9839-5486-44ed-bccd-7219927f1582-kube-api-access-mg7wc" (OuterVolumeSpecName: "kube-api-access-mg7wc") pod "586a9839-5486-44ed-bccd-7219927f1582" (UID: "586a9839-5486-44ed-bccd-7219927f1582"). InnerVolumeSpecName "kube-api-access-mg7wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:50 crc kubenswrapper[4656]: I0128 15:33:50.658929 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg7wc\" (UniqueName: \"kubernetes.io/projected/586a9839-5486-44ed-bccd-7219927f1582-kube-api-access-mg7wc\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.039839 4656 generic.go:334] "Generic (PLEG): container finished" podID="586a9839-5486-44ed-bccd-7219927f1582" containerID="32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09" exitCode=0 Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.039921 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hc9z5" Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.039942 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hc9z5" event={"ID":"586a9839-5486-44ed-bccd-7219927f1582","Type":"ContainerDied","Data":"32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09"} Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.040011 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hc9z5" event={"ID":"586a9839-5486-44ed-bccd-7219927f1582","Type":"ContainerDied","Data":"1d55a2c91351df6542144ca7b0d478b0ef9b7d981b9e83ff9b7d814420f07605"} Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.040058 4656 scope.go:117] "RemoveContainer" containerID="32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09" Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.063567 4656 scope.go:117] "RemoveContainer" containerID="32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09" Jan 28 15:33:51 crc kubenswrapper[4656]: E0128 15:33:51.065620 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09\": container with ID starting with 32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09 not found: ID does not exist" containerID="32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09" Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.065665 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09"} err="failed to get container status \"32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09\": rpc error: code = NotFound desc = could not find container 
\"32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09\": container with ID starting with 32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09 not found: ID does not exist" Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.072058 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hc9z5"] Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.076899 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-hc9z5"] Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.127432 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.177815 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="586a9839-5486-44ed-bccd-7219927f1582" path="/var/lib/kubelet/pods/586a9839-5486-44ed-bccd-7219927f1582/volumes" Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.178372 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:54 crc kubenswrapper[4656]: I0128 15:33:54.241900 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d6s8p"] Jan 28 15:33:54 crc kubenswrapper[4656]: I0128 15:33:54.242596 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d6s8p" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="registry-server" containerID="cri-o://7dbc24f34e40013f613a8a2c137595d28f480ccce0e5b63798e70e4ef84cc0e5" gracePeriod=2 Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.097093 4656 generic.go:334] "Generic (PLEG): container finished" podID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerID="7dbc24f34e40013f613a8a2c137595d28f480ccce0e5b63798e70e4ef84cc0e5" exitCode=0 Jan 28 15:33:55 
crc kubenswrapper[4656]: I0128 15:33:55.097411 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6s8p" event={"ID":"aea8db27-38e0-4ac8-8835-b9d701c8f230","Type":"ContainerDied","Data":"7dbc24f34e40013f613a8a2c137595d28f480ccce0e5b63798e70e4ef84cc0e5"} Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.401411 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.496727 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-catalog-content\") pod \"aea8db27-38e0-4ac8-8835-b9d701c8f230\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.496865 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-utilities\") pod \"aea8db27-38e0-4ac8-8835-b9d701c8f230\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.496899 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6tz7\" (UniqueName: \"kubernetes.io/projected/aea8db27-38e0-4ac8-8835-b9d701c8f230-kube-api-access-b6tz7\") pod \"aea8db27-38e0-4ac8-8835-b9d701c8f230\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.497981 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-utilities" (OuterVolumeSpecName: "utilities") pod "aea8db27-38e0-4ac8-8835-b9d701c8f230" (UID: "aea8db27-38e0-4ac8-8835-b9d701c8f230"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.505269 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aea8db27-38e0-4ac8-8835-b9d701c8f230-kube-api-access-b6tz7" (OuterVolumeSpecName: "kube-api-access-b6tz7") pod "aea8db27-38e0-4ac8-8835-b9d701c8f230" (UID: "aea8db27-38e0-4ac8-8835-b9d701c8f230"). InnerVolumeSpecName "kube-api-access-b6tz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.550822 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aea8db27-38e0-4ac8-8835-b9d701c8f230" (UID: "aea8db27-38e0-4ac8-8835-b9d701c8f230"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.597837 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.597885 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6tz7\" (UniqueName: \"kubernetes.io/projected/aea8db27-38e0-4ac8-8835-b9d701c8f230-kube-api-access-b6tz7\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.597898 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.106395 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6s8p" 
event={"ID":"aea8db27-38e0-4ac8-8835-b9d701c8f230","Type":"ContainerDied","Data":"c85a20e117717387a489dc5dd61783b4e5fcd7cda3111851f391796dcda146f0"} Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.106523 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.106537 4656 scope.go:117] "RemoveContainer" containerID="7dbc24f34e40013f613a8a2c137595d28f480ccce0e5b63798e70e4ef84cc0e5" Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.124772 4656 scope.go:117] "RemoveContainer" containerID="9c21dc5b63458bfdf67af2165e2644c35c783d3c4eb17c296ab48b7688983ee9" Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.140524 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d6s8p"] Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.145464 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d6s8p"] Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.150553 4656 scope.go:117] "RemoveContainer" containerID="fe8b322c739039fd04ab424c28e05ffbfdd4af0e21a26d5ce1fb07d9670baa75" Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.203593 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-tknhq" Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.204005 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-tknhq" Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.234831 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-tknhq" Jan 28 15:33:57 crc kubenswrapper[4656]: I0128 15:33:57.138918 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-index-tknhq" Jan 28 15:33:57 crc kubenswrapper[4656]: I0128 15:33:57.181142 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" path="/var/lib/kubelet/pods/aea8db27-38e0-4ac8-8835-b9d701c8f230/volumes" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.448057 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r7xrq"] Jan 28 15:33:59 crc kubenswrapper[4656]: E0128 15:33:59.448843 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="extract-content" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.448864 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="extract-content" Jan 28 15:33:59 crc kubenswrapper[4656]: E0128 15:33:59.448886 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586a9839-5486-44ed-bccd-7219927f1582" containerName="registry-server" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.448893 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="586a9839-5486-44ed-bccd-7219927f1582" containerName="registry-server" Jan 28 15:33:59 crc kubenswrapper[4656]: E0128 15:33:59.448905 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="extract-utilities" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.448913 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="extract-utilities" Jan 28 15:33:59 crc kubenswrapper[4656]: E0128 15:33:59.448931 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="registry-server" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.448939 4656 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="registry-server" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.449133 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="registry-server" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.449155 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="586a9839-5486-44ed-bccd-7219927f1582" containerName="registry-server" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.450130 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.475294 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7xrq"] Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.478199 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-catalog-content\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.478296 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfvnn\" (UniqueName: \"kubernetes.io/projected/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-kube-api-access-cfvnn\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.478416 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-utilities\") pod \"certified-operators-r7xrq\" (UID: 
\"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.579539 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-utilities\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.579616 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-catalog-content\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.579678 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfvnn\" (UniqueName: \"kubernetes.io/projected/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-kube-api-access-cfvnn\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.580255 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-utilities\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.580330 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-catalog-content\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") 
" pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.615924 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfvnn\" (UniqueName: \"kubernetes.io/projected/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-kube-api-access-cfvnn\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.768714 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:34:00 crc kubenswrapper[4656]: I0128 15:34:00.065936 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7xrq"] Jan 28 15:34:00 crc kubenswrapper[4656]: W0128 15:34:00.075728 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10cc0f8e_3b8e_4ce0_9141_fd48481ff622.slice/crio-3e359e795f61a217cdcafd8cb2067b82bb9505f246593f62c063892da4406bbd WatchSource:0}: Error finding container 3e359e795f61a217cdcafd8cb2067b82bb9505f246593f62c063892da4406bbd: Status 404 returned error can't find the container with id 3e359e795f61a217cdcafd8cb2067b82bb9505f246593f62c063892da4406bbd Jan 28 15:34:00 crc kubenswrapper[4656]: I0128 15:34:00.152415 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7xrq" event={"ID":"10cc0f8e-3b8e-4ce0-9141-fd48481ff622","Type":"ContainerStarted","Data":"3e359e795f61a217cdcafd8cb2067b82bb9505f246593f62c063892da4406bbd"} Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.161072 4656 generic.go:334] "Generic (PLEG): container finished" podID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerID="5645c2dd6f454c80b7bbeb869d6c9e929d591ed8e12d98c6d8681e2f82ba32bb" exitCode=0 Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 
15:34:01.161434 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7xrq" event={"ID":"10cc0f8e-3b8e-4ce0-9141-fd48481ff622","Type":"ContainerDied","Data":"5645c2dd6f454c80b7bbeb869d6c9e929d591ed8e12d98c6d8681e2f82ba32bb"} Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.693377 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq"] Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.694579 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.707355 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq"] Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.707790 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-bgc4w" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.808795 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6fgx\" (UniqueName: \"kubernetes.io/projected/a11dcdce-e6bc-48a2-b273-3755e5aee495-kube-api-access-k6fgx\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.808926 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-bundle\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " 
pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.808966 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-util\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.910754 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6fgx\" (UniqueName: \"kubernetes.io/projected/a11dcdce-e6bc-48a2-b273-3755e5aee495-kube-api-access-k6fgx\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.910855 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-bundle\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.910910 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-util\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 
15:34:01.911595 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-util\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.911603 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-bundle\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.929854 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6fgx\" (UniqueName: \"kubernetes.io/projected/a11dcdce-e6bc-48a2-b273-3755e5aee495-kube-api-access-k6fgx\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:02 crc kubenswrapper[4656]: I0128 15:34:02.019281 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:02 crc kubenswrapper[4656]: I0128 15:34:02.167836 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7xrq" event={"ID":"10cc0f8e-3b8e-4ce0-9141-fd48481ff622","Type":"ContainerStarted","Data":"b9e878f73cd88ccd02aece43db5b72158f4dc09151f8f8581608bf10b91e8667"} Jan 28 15:34:02 crc kubenswrapper[4656]: I0128 15:34:02.340031 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq"] Jan 28 15:34:03 crc kubenswrapper[4656]: I0128 15:34:03.175273 4656 generic.go:334] "Generic (PLEG): container finished" podID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerID="b9e878f73cd88ccd02aece43db5b72158f4dc09151f8f8581608bf10b91e8667" exitCode=0 Jan 28 15:34:03 crc kubenswrapper[4656]: I0128 15:34:03.177584 4656 generic.go:334] "Generic (PLEG): container finished" podID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerID="2c9c6bff4cedaf5dd39afb3e2d6c64d674b622d996bc32d90c2a5a5c6e725e02" exitCode=0 Jan 28 15:34:03 crc kubenswrapper[4656]: I0128 15:34:03.179721 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7xrq" event={"ID":"10cc0f8e-3b8e-4ce0-9141-fd48481ff622","Type":"ContainerDied","Data":"b9e878f73cd88ccd02aece43db5b72158f4dc09151f8f8581608bf10b91e8667"} Jan 28 15:34:03 crc kubenswrapper[4656]: I0128 15:34:03.179752 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" event={"ID":"a11dcdce-e6bc-48a2-b273-3755e5aee495","Type":"ContainerDied","Data":"2c9c6bff4cedaf5dd39afb3e2d6c64d674b622d996bc32d90c2a5a5c6e725e02"} Jan 28 15:34:03 crc kubenswrapper[4656]: I0128 15:34:03.179763 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" event={"ID":"a11dcdce-e6bc-48a2-b273-3755e5aee495","Type":"ContainerStarted","Data":"30e60863f34de76684074956da200a0e1a04f3f9c65027ce9ecc16a1f1c826a1"}
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.193462 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7xrq" event={"ID":"10cc0f8e-3b8e-4ce0-9141-fd48481ff622","Type":"ContainerStarted","Data":"37b3ed1147c4ee45df4ae55dc2fda63b24c7adf4b374f9696bf3214bfcc5b1a8"}
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.197602 4656 generic.go:334] "Generic (PLEG): container finished" podID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerID="4c9dd1da445842fdfec59ae60f4a2121d08dac1104abf88f297c01649350b333" exitCode=0
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.197641 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" event={"ID":"a11dcdce-e6bc-48a2-b273-3755e5aee495","Type":"ContainerDied","Data":"4c9dd1da445842fdfec59ae60f4a2121d08dac1104abf88f297c01649350b333"}
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.245965 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r7xrq" podStartSLOduration=2.7380914819999997 podStartE2EDuration="5.245939674s" podCreationTimestamp="2026-01-28 15:33:59 +0000 UTC" firstStartedPulling="2026-01-28 15:34:01.163589385 +0000 UTC m=+931.671760189" lastFinishedPulling="2026-01-28 15:34:03.671437577 +0000 UTC m=+934.179608381" observedRunningTime="2026-01-28 15:34:04.221974563 +0000 UTC m=+934.730145367" watchObservedRunningTime="2026-01-28 15:34:04.245939674 +0000 UTC m=+934.754110478"
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.664773 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rjtm5"]
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.666593 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.680349 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjtm5"]
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.858826 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-catalog-content\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.858958 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz7ns\" (UniqueName: \"kubernetes.io/projected/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-kube-api-access-qz7ns\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.858994 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-utilities\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.960184 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-catalog-content\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.960527 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz7ns\" (UniqueName: \"kubernetes.io/projected/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-kube-api-access-qz7ns\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.961085 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-utilities\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.960929 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-catalog-content\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.961475 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-utilities\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.984126 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz7ns\" (UniqueName: \"kubernetes.io/projected/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-kube-api-access-qz7ns\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.989665 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:05 crc kubenswrapper[4656]: I0128 15:34:05.216043 4656 generic.go:334] "Generic (PLEG): container finished" podID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerID="c2eef08f3534083715182ac4af984bbef82d830a11d3f94be9ddf07c8db2098f" exitCode=0
Jan 28 15:34:05 crc kubenswrapper[4656]: I0128 15:34:05.216197 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" event={"ID":"a11dcdce-e6bc-48a2-b273-3755e5aee495","Type":"ContainerDied","Data":"c2eef08f3534083715182ac4af984bbef82d830a11d3f94be9ddf07c8db2098f"}
Jan 28 15:34:05 crc kubenswrapper[4656]: I0128 15:34:05.277700 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjtm5"]
Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.226323 4656 generic.go:334] "Generic (PLEG): container finished" podID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerID="a38702ba1010780fceeb4e14ba15c9a6be53702dae80ef7db7486122c9680a4c" exitCode=0
Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.227445 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjtm5" event={"ID":"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90","Type":"ContainerDied","Data":"a38702ba1010780fceeb4e14ba15c9a6be53702dae80ef7db7486122c9680a4c"}
Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.227471 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjtm5" event={"ID":"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90","Type":"ContainerStarted","Data":"0a1f1983ebbdfd9470548313b59bc006ed4c1ebef587436ac923240c5763c30c"}
Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.465639 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq"
Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.484226 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-bundle\") pod \"a11dcdce-e6bc-48a2-b273-3755e5aee495\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") "
Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.484304 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-util\") pod \"a11dcdce-e6bc-48a2-b273-3755e5aee495\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") "
Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.484355 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6fgx\" (UniqueName: \"kubernetes.io/projected/a11dcdce-e6bc-48a2-b273-3755e5aee495-kube-api-access-k6fgx\") pod \"a11dcdce-e6bc-48a2-b273-3755e5aee495\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") "
Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.485057 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-bundle" (OuterVolumeSpecName: "bundle") pod "a11dcdce-e6bc-48a2-b273-3755e5aee495" (UID: "a11dcdce-e6bc-48a2-b273-3755e5aee495"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.494750 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a11dcdce-e6bc-48a2-b273-3755e5aee495-kube-api-access-k6fgx" (OuterVolumeSpecName: "kube-api-access-k6fgx") pod "a11dcdce-e6bc-48a2-b273-3755e5aee495" (UID: "a11dcdce-e6bc-48a2-b273-3755e5aee495"). InnerVolumeSpecName "kube-api-access-k6fgx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.573575 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-util" (OuterVolumeSpecName: "util") pod "a11dcdce-e6bc-48a2-b273-3755e5aee495" (UID: "a11dcdce-e6bc-48a2-b273-3755e5aee495"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.593254 4656 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-util\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.593317 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6fgx\" (UniqueName: \"kubernetes.io/projected/a11dcdce-e6bc-48a2-b273-3755e5aee495-kube-api-access-k6fgx\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.593330 4656 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:07 crc kubenswrapper[4656]: I0128 15:34:07.235644 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" event={"ID":"a11dcdce-e6bc-48a2-b273-3755e5aee495","Type":"ContainerDied","Data":"30e60863f34de76684074956da200a0e1a04f3f9c65027ce9ecc16a1f1c826a1"}
Jan 28 15:34:07 crc kubenswrapper[4656]: I0128 15:34:07.235700 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq"
Jan 28 15:34:07 crc kubenswrapper[4656]: I0128 15:34:07.235915 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30e60863f34de76684074956da200a0e1a04f3f9c65027ce9ecc16a1f1c826a1"
Jan 28 15:34:08 crc kubenswrapper[4656]: I0128 15:34:08.244941 4656 generic.go:334] "Generic (PLEG): container finished" podID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerID="7ff58517afbd24556b20c54ce145108e3bbe303b3565c3ff9215649e73488d31" exitCode=0
Jan 28 15:34:08 crc kubenswrapper[4656]: I0128 15:34:08.244992 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjtm5" event={"ID":"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90","Type":"ContainerDied","Data":"7ff58517afbd24556b20c54ce145108e3bbe303b3565c3ff9215649e73488d31"}
Jan 28 15:34:09 crc kubenswrapper[4656]: I0128 15:34:09.769203 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r7xrq"
Jan 28 15:34:09 crc kubenswrapper[4656]: I0128 15:34:09.769806 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r7xrq"
Jan 28 15:34:09 crc kubenswrapper[4656]: I0128 15:34:09.816535 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r7xrq"
Jan 28 15:34:10 crc kubenswrapper[4656]: I0128 15:34:10.260216 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjtm5" event={"ID":"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90","Type":"ContainerStarted","Data":"f3e912696e56567cc3ca1a3ddbe7eb9c088b3d07cad9fd5164dc26cbe93c0dcd"}
Jan 28 15:34:10 crc kubenswrapper[4656]: I0128 15:34:10.315415 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r7xrq"
Jan 28 15:34:10 crc kubenswrapper[4656]: I0128 15:34:10.332463 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rjtm5" podStartSLOduration=3.133571562 podStartE2EDuration="6.332438941s" podCreationTimestamp="2026-01-28 15:34:04 +0000 UTC" firstStartedPulling="2026-01-28 15:34:06.229966518 +0000 UTC m=+936.738137332" lastFinishedPulling="2026-01-28 15:34:09.428833907 +0000 UTC m=+939.937004711" observedRunningTime="2026-01-28 15:34:10.282556343 +0000 UTC m=+940.790727137" watchObservedRunningTime="2026-01-28 15:34:10.332438941 +0000 UTC m=+940.840609745"
Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.570450 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb"]
Jan 28 15:34:11 crc kubenswrapper[4656]: E0128 15:34:11.571131 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerName="util"
Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.571148 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerName="util"
Jan 28 15:34:11 crc kubenswrapper[4656]: E0128 15:34:11.571220 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerName="extract"
Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.571230 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerName="extract"
Jan 28 15:34:11 crc kubenswrapper[4656]: E0128 15:34:11.571251 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerName="pull"
Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.571261 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerName="pull"
Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.571412 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerName="extract"
Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.571925 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb"
Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.575389 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-n8d8s"
Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.609526 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb"]
Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.759604 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n77cx\" (UniqueName: \"kubernetes.io/projected/ab5fdcdc-7606-4e97-a65a-c98545c1a74a-kube-api-access-n77cx\") pod \"openstack-operator-controller-init-678d9cfb88-c5xvb\" (UID: \"ab5fdcdc-7606-4e97-a65a-c98545c1a74a\") " pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb"
Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.860622 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n77cx\" (UniqueName: \"kubernetes.io/projected/ab5fdcdc-7606-4e97-a65a-c98545c1a74a-kube-api-access-n77cx\") pod \"openstack-operator-controller-init-678d9cfb88-c5xvb\" (UID: \"ab5fdcdc-7606-4e97-a65a-c98545c1a74a\") " pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb"
Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.883097 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n77cx\" (UniqueName: \"kubernetes.io/projected/ab5fdcdc-7606-4e97-a65a-c98545c1a74a-kube-api-access-n77cx\") pod \"openstack-operator-controller-init-678d9cfb88-c5xvb\" (UID: \"ab5fdcdc-7606-4e97-a65a-c98545c1a74a\") " pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb"
Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.891294 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb"
Jan 28 15:34:12 crc kubenswrapper[4656]: I0128 15:34:12.387663 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb"]
Jan 28 15:34:12 crc kubenswrapper[4656]: W0128 15:34:12.392627 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab5fdcdc_7606_4e97_a65a_c98545c1a74a.slice/crio-99c29b134aa0cb127e32a07f02e2fea2f3641ed8f67a625e143fb99094196b02 WatchSource:0}: Error finding container 99c29b134aa0cb127e32a07f02e2fea2f3641ed8f67a625e143fb99094196b02: Status 404 returned error can't find the container with id 99c29b134aa0cb127e32a07f02e2fea2f3641ed8f67a625e143fb99094196b02
Jan 28 15:34:13 crc kubenswrapper[4656]: I0128 15:34:13.281020 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb" event={"ID":"ab5fdcdc-7606-4e97-a65a-c98545c1a74a","Type":"ContainerStarted","Data":"99c29b134aa0cb127e32a07f02e2fea2f3641ed8f67a625e143fb99094196b02"}
Jan 28 15:34:14 crc kubenswrapper[4656]: I0128 15:34:14.990370 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:14 crc kubenswrapper[4656]: I0128 15:34:14.990729 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:15 crc kubenswrapper[4656]: I0128 15:34:15.054979 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:15 crc kubenswrapper[4656]: I0128 15:34:15.340852 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:16 crc kubenswrapper[4656]: I0128 15:34:16.443468 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7xrq"]
Jan 28 15:34:16 crc kubenswrapper[4656]: I0128 15:34:16.443778 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r7xrq" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="registry-server" containerID="cri-o://37b3ed1147c4ee45df4ae55dc2fda63b24c7adf4b374f9696bf3214bfcc5b1a8" gracePeriod=2
Jan 28 15:34:16 crc kubenswrapper[4656]: I0128 15:34:16.638200 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjtm5"]
Jan 28 15:34:17 crc kubenswrapper[4656]: I0128 15:34:17.306223 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rjtm5" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="registry-server" containerID="cri-o://f3e912696e56567cc3ca1a3ddbe7eb9c088b3d07cad9fd5164dc26cbe93c0dcd" gracePeriod=2
Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.317891 4656 generic.go:334] "Generic (PLEG): container finished" podID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerID="37b3ed1147c4ee45df4ae55dc2fda63b24c7adf4b374f9696bf3214bfcc5b1a8" exitCode=0
Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.318072 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7xrq" event={"ID":"10cc0f8e-3b8e-4ce0-9141-fd48481ff622","Type":"ContainerDied","Data":"37b3ed1147c4ee45df4ae55dc2fda63b24c7adf4b374f9696bf3214bfcc5b1a8"}
Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.649969 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7xrq"
Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.761626 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-utilities\") pod \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") "
Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.761717 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfvnn\" (UniqueName: \"kubernetes.io/projected/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-kube-api-access-cfvnn\") pod \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") "
Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.761748 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-catalog-content\") pod \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") "
Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.780325 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-kube-api-access-cfvnn" (OuterVolumeSpecName: "kube-api-access-cfvnn") pod "10cc0f8e-3b8e-4ce0-9141-fd48481ff622" (UID: "10cc0f8e-3b8e-4ce0-9141-fd48481ff622"). InnerVolumeSpecName "kube-api-access-cfvnn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.783283 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-utilities" (OuterVolumeSpecName: "utilities") pod "10cc0f8e-3b8e-4ce0-9141-fd48481ff622" (UID: "10cc0f8e-3b8e-4ce0-9141-fd48481ff622"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.818291 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10cc0f8e-3b8e-4ce0-9141-fd48481ff622" (UID: "10cc0f8e-3b8e-4ce0-9141-fd48481ff622"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.863580 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.863621 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfvnn\" (UniqueName: \"kubernetes.io/projected/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-kube-api-access-cfvnn\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.863637 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.331239 4656 generic.go:334] "Generic (PLEG): container finished" podID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerID="f3e912696e56567cc3ca1a3ddbe7eb9c088b3d07cad9fd5164dc26cbe93c0dcd" exitCode=0
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.331280 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjtm5" event={"ID":"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90","Type":"ContainerDied","Data":"f3e912696e56567cc3ca1a3ddbe7eb9c088b3d07cad9fd5164dc26cbe93c0dcd"}
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.334859 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7xrq" event={"ID":"10cc0f8e-3b8e-4ce0-9141-fd48481ff622","Type":"ContainerDied","Data":"3e359e795f61a217cdcafd8cb2067b82bb9505f246593f62c063892da4406bbd"}
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.334906 4656 scope.go:117] "RemoveContainer" containerID="37b3ed1147c4ee45df4ae55dc2fda63b24c7adf4b374f9696bf3214bfcc5b1a8"
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.335049 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7xrq"
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.385007 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.398680 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7xrq"]
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.404034 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r7xrq"]
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.475530 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz7ns\" (UniqueName: \"kubernetes.io/projected/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-kube-api-access-qz7ns\") pod \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") "
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.475611 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-catalog-content\") pod \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") "
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.475669 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-utilities\") pod \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") "
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.476592 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-utilities" (OuterVolumeSpecName: "utilities") pod "6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" (UID: "6be28c80-fbc8-4f9a-b5df-247d3ad5fa90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.483401 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-kube-api-access-qz7ns" (OuterVolumeSpecName: "kube-api-access-qz7ns") pod "6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" (UID: "6be28c80-fbc8-4f9a-b5df-247d3ad5fa90"). InnerVolumeSpecName "kube-api-access-qz7ns". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.500254 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" (UID: "6be28c80-fbc8-4f9a-b5df-247d3ad5fa90"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.577094 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz7ns\" (UniqueName: \"kubernetes.io/projected/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-kube-api-access-qz7ns\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.577129 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.577138 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:20 crc kubenswrapper[4656]: I0128 15:34:20.349639 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjtm5" event={"ID":"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90","Type":"ContainerDied","Data":"0a1f1983ebbdfd9470548313b59bc006ed4c1ebef587436ac923240c5763c30c"}
Jan 28 15:34:20 crc kubenswrapper[4656]: I0128 15:34:20.349761 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjtm5"
Jan 28 15:34:20 crc kubenswrapper[4656]: I0128 15:34:20.380976 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjtm5"]
Jan 28 15:34:20 crc kubenswrapper[4656]: I0128 15:34:20.386550 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjtm5"]
Jan 28 15:34:21 crc kubenswrapper[4656]: I0128 15:34:21.574354 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" path="/var/lib/kubelet/pods/10cc0f8e-3b8e-4ce0-9141-fd48481ff622/volumes"
Jan 28 15:34:21 crc kubenswrapper[4656]: I0128 15:34:21.575432 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" path="/var/lib/kubelet/pods/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90/volumes"
Jan 28 15:34:24 crc kubenswrapper[4656]: I0128 15:34:24.054220 4656 scope.go:117] "RemoveContainer" containerID="b9e878f73cd88ccd02aece43db5b72158f4dc09151f8f8581608bf10b91e8667"
Jan 28 15:34:24 crc kubenswrapper[4656]: I0128 15:34:24.706115 4656 scope.go:117] "RemoveContainer" containerID="5645c2dd6f454c80b7bbeb869d6c9e929d591ed8e12d98c6d8681e2f82ba32bb"
Jan 28 15:34:24 crc kubenswrapper[4656]: I0128 15:34:24.728271 4656 scope.go:117] "RemoveContainer" containerID="f3e912696e56567cc3ca1a3ddbe7eb9c088b3d07cad9fd5164dc26cbe93c0dcd"
Jan 28 15:34:24 crc kubenswrapper[4656]: I0128 15:34:24.752416 4656 scope.go:117] "RemoveContainer" containerID="7ff58517afbd24556b20c54ce145108e3bbe303b3565c3ff9215649e73488d31"
Jan 28 15:34:24 crc kubenswrapper[4656]: I0128 15:34:24.771455 4656 scope.go:117] "RemoveContainer" containerID="a38702ba1010780fceeb4e14ba15c9a6be53702dae80ef7db7486122c9680a4c"
Jan 28 15:34:26 crc kubenswrapper[4656]: I0128 15:34:26.601888 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb" event={"ID":"ab5fdcdc-7606-4e97-a65a-c98545c1a74a","Type":"ContainerStarted","Data":"12301810d801383635a6729a0fea2084d71eed671a8e76f6f559a4275845dd3f"}
Jan 28 15:34:26 crc kubenswrapper[4656]: I0128 15:34:26.602522 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb"
Jan 28 15:34:26 crc kubenswrapper[4656]: I0128 15:34:26.632675 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb" podStartSLOduration=2.340251692 podStartE2EDuration="15.632654528s" podCreationTimestamp="2026-01-28 15:34:11 +0000 UTC" firstStartedPulling="2026-01-28 15:34:12.396862071 +0000 UTC m=+942.905032865" lastFinishedPulling="2026-01-28 15:34:25.689264897 +0000 UTC m=+956.197435701" observedRunningTime="2026-01-28 15:34:26.631123454 +0000 UTC m=+957.139294248" watchObservedRunningTime="2026-01-28 15:34:26.632654528 +0000 UTC m=+957.140825332"
Jan 28 15:34:31 crc kubenswrapper[4656]: I0128 15:34:31.898304 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.338720 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f"]
Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.339772 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="registry-server"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.339797 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="registry-server"
Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.339828 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="registry-server"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.339837 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="registry-server"
Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.339848 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="extract-content"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.339856 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="extract-content"
Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.339917 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="extract-content"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.339926 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="extract-content"
Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.339941 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="extract-utilities"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.339949 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="extract-utilities"
Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.339962 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="extract-utilities"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.339969 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="extract-utilities"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.340150 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="registry-server"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.340200 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="registry-server"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.340911 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.345105 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ld9jz"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.352974 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q"]
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.354005 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.358348 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-t87h2"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.371571 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f"]
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.379505 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q"]
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.397287 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw"]
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.398117 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.410638 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8rqbd"
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.437905 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw"]
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.466629 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq"]
Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.467565 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.470921 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2txf\" (UniqueName: \"kubernetes.io/projected/0cf0a4ad-85dd-47df-9307-e469f075a098-kube-api-access-c2txf\") pod \"cinder-operator-controller-manager-f6487bd57-jwv7f\" (UID: \"0cf0a4ad-85dd-47df-9307-e469f075a098\") " pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.470985 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhlmp\" (UniqueName: \"kubernetes.io/projected/6ce4cdbc-3227-4679-8da9-9fd537996bd7-kube-api-access-jhlmp\") pod \"barbican-operator-controller-manager-6bc7f4f4cf-hd57q\" (UID: \"6ce4cdbc-3227-4679-8da9-9fd537996bd7\") " pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.471698 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hc6jb" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.505236 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.506109 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.509315 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.510913 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-kddk9" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.548613 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.557453 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.558436 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.572420 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bk555" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.572626 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhlmp\" (UniqueName: \"kubernetes.io/projected/6ce4cdbc-3227-4679-8da9-9fd537996bd7-kube-api-access-jhlmp\") pod \"barbican-operator-controller-manager-6bc7f4f4cf-hd57q\" (UID: \"6ce4cdbc-3227-4679-8da9-9fd537996bd7\") " pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.572694 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn4q4\" (UniqueName: \"kubernetes.io/projected/cfeab083-1268-47aa-938e-bd91036755de-kube-api-access-rn4q4\") pod \"designate-operator-controller-manager-66dfbd6f5d-r8cjw\" (UID: \"cfeab083-1268-47aa-938e-bd91036755de\") " pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.572739 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pck4m\" (UniqueName: \"kubernetes.io/projected/113ba11f-aeba-4710-b5f6-0991e9766d45-kube-api-access-pck4m\") pod \"glance-operator-controller-manager-6db5dbd896-cfpjq\" (UID: \"113ba11f-aeba-4710-b5f6-0991e9766d45\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.572815 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2txf\" (UniqueName: 
\"kubernetes.io/projected/0cf0a4ad-85dd-47df-9307-e469f075a098-kube-api-access-c2txf\") pod \"cinder-operator-controller-manager-f6487bd57-jwv7f\" (UID: \"0cf0a4ad-85dd-47df-9307-e469f075a098\") " pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.585088 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.586131 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.595799 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qc7x2" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.595997 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.620364 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.638380 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.646586 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2txf\" (UniqueName: \"kubernetes.io/projected/0cf0a4ad-85dd-47df-9307-e469f075a098-kube-api-access-c2txf\") pod \"cinder-operator-controller-manager-f6487bd57-jwv7f\" (UID: \"0cf0a4ad-85dd-47df-9307-e469f075a098\") " pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.660640 4656 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.671926 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhlmp\" (UniqueName: \"kubernetes.io/projected/6ce4cdbc-3227-4679-8da9-9fd537996bd7-kube-api-access-jhlmp\") pod \"barbican-operator-controller-manager-6bc7f4f4cf-hd57q\" (UID: \"6ce4cdbc-3227-4679-8da9-9fd537996bd7\") " pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.672295 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.673945 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn4q4\" (UniqueName: \"kubernetes.io/projected/cfeab083-1268-47aa-938e-bd91036755de-kube-api-access-rn4q4\") pod \"designate-operator-controller-manager-66dfbd6f5d-r8cjw\" (UID: \"cfeab083-1268-47aa-938e-bd91036755de\") " pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.674031 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pck4m\" (UniqueName: \"kubernetes.io/projected/113ba11f-aeba-4710-b5f6-0991e9766d45-kube-api-access-pck4m\") pod \"glance-operator-controller-manager-6db5dbd896-cfpjq\" (UID: \"113ba11f-aeba-4710-b5f6-0991e9766d45\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.674075 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz54t\" (UniqueName: 
\"kubernetes.io/projected/45be18b4-f249-4c09-8875-9959686d7f8f-kube-api-access-jz54t\") pod \"heat-operator-controller-manager-587c6bfdcf-xjnqt\" (UID: \"45be18b4-f249-4c09-8875-9959686d7f8f\") " pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.674104 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l2vx\" (UniqueName: \"kubernetes.io/projected/1bfa2d1e-9ab0-478a-a19d-d031a1a8a312-kube-api-access-6l2vx\") pod \"horizon-operator-controller-manager-5fb775575f-9q5lw\" (UID: \"1bfa2d1e-9ab0-478a-a19d-d031a1a8a312\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.691407 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.692569 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.698565 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-w8b4l" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.735091 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pck4m\" (UniqueName: \"kubernetes.io/projected/113ba11f-aeba-4710-b5f6-0991e9766d45-kube-api-access-pck4m\") pod \"glance-operator-controller-manager-6db5dbd896-cfpjq\" (UID: \"113ba11f-aeba-4710-b5f6-0991e9766d45\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.756285 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn4q4\" (UniqueName: \"kubernetes.io/projected/cfeab083-1268-47aa-938e-bd91036755de-kube-api-access-rn4q4\") pod \"designate-operator-controller-manager-66dfbd6f5d-r8cjw\" (UID: \"cfeab083-1268-47aa-938e-bd91036755de\") " pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.774959 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz54t\" (UniqueName: \"kubernetes.io/projected/45be18b4-f249-4c09-8875-9959686d7f8f-kube-api-access-jz54t\") pod \"heat-operator-controller-manager-587c6bfdcf-xjnqt\" (UID: \"45be18b4-f249-4c09-8875-9959686d7f8f\") " pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.775022 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l2vx\" (UniqueName: \"kubernetes.io/projected/1bfa2d1e-9ab0-478a-a19d-d031a1a8a312-kube-api-access-6l2vx\") pod 
\"horizon-operator-controller-manager-5fb775575f-9q5lw\" (UID: \"1bfa2d1e-9ab0-478a-a19d-d031a1a8a312\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.775057 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbm4n\" (UniqueName: \"kubernetes.io/projected/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-kube-api-access-vbm4n\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.775122 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.775548 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.776354 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.795215 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.795675 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.832349 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-vp774" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.853653 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l2vx\" (UniqueName: \"kubernetes.io/projected/1bfa2d1e-9ab0-478a-a19d-d031a1a8a312-kube-api-access-6l2vx\") pod \"horizon-operator-controller-manager-5fb775575f-9q5lw\" (UID: \"1bfa2d1e-9ab0-478a-a19d-d031a1a8a312\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.853743 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.854592 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz54t\" (UniqueName: \"kubernetes.io/projected/45be18b4-f249-4c09-8875-9959686d7f8f-kube-api-access-jz54t\") pod \"heat-operator-controller-manager-587c6bfdcf-xjnqt\" (UID: \"45be18b4-f249-4c09-8875-9959686d7f8f\") " pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.875572 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-765668569f-7kctj"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.876624 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.878907 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkhq7\" (UniqueName: \"kubernetes.io/projected/ae47e69a-49f4-4b1a-8d68-068b5e99f22a-kube-api-access-rkhq7\") pod \"ironic-operator-controller-manager-958664b5-m9jtk\" (UID: \"ae47e69a-49f4-4b1a-8d68-068b5e99f22a\") " pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.878978 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.879070 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbm4n\" (UniqueName: \"kubernetes.io/projected/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-kube-api-access-vbm4n\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.879110 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbknf\" (UniqueName: \"kubernetes.io/projected/a5bdaf78-b590-429f-bc9b-46c67a369456-kube-api-access-mbknf\") pod \"keystone-operator-controller-manager-7b84b46695-86ht2\" (UID: \"a5bdaf78-b590-429f-bc9b-46c67a369456\") " pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.879358 
4656 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.879494 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert podName:7341d49c-e9a9-4108-8a2c-bf808ccb49cf nodeName:}" failed. No retries permitted until 2026-01-28 15:34:51.379444491 +0000 UTC m=+981.887615295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert") pod "infra-operator-controller-manager-79955696d6-bfl2p" (UID: "7341d49c-e9a9-4108-8a2c-bf808ccb49cf") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.883317 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.884447 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.890688 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.893950 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-vjs7q" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.894261 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-czpv6" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.894655 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.908349 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.909440 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbm4n\" (UniqueName: \"kubernetes.io/projected/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-kube-api-access-vbm4n\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.909483 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.915695 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-gg2dr" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.020425 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkhq7\" (UniqueName: \"kubernetes.io/projected/ae47e69a-49f4-4b1a-8d68-068b5e99f22a-kube-api-access-rkhq7\") pod \"ironic-operator-controller-manager-958664b5-m9jtk\" (UID: \"ae47e69a-49f4-4b1a-8d68-068b5e99f22a\") " pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.039244 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.020613 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnlpq\" (UniqueName: \"kubernetes.io/projected/0a83428f-312c-4590-beb3-8da4994c8951-kube-api-access-dnlpq\") pod \"manila-operator-controller-manager-765668569f-7kctj\" (UID: \"0a83428f-312c-4590-beb3-8da4994c8951\") " pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.045803 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kng9p\" (UniqueName: \"kubernetes.io/projected/9277e421-df3a-49a2-81cc-86d0f7c65809-kube-api-access-kng9p\") pod \"mariadb-operator-controller-manager-67bf948998-p92zm\" (UID: \"9277e421-df3a-49a2-81cc-86d0f7c65809\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.046038 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbknf\" (UniqueName: \"kubernetes.io/projected/a5bdaf78-b590-429f-bc9b-46c67a369456-kube-api-access-mbknf\") pod \"keystone-operator-controller-manager-7b84b46695-86ht2\" (UID: \"a5bdaf78-b590-429f-bc9b-46c67a369456\") " pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.066595 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-765668569f-7kctj"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.127129 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.127372 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbknf\" (UniqueName: \"kubernetes.io/projected/a5bdaf78-b590-429f-bc9b-46c67a369456-kube-api-access-mbknf\") pod \"keystone-operator-controller-manager-7b84b46695-86ht2\" (UID: \"a5bdaf78-b590-429f-bc9b-46c67a369456\") " pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.129604 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.151654 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkhq7\" (UniqueName: \"kubernetes.io/projected/ae47e69a-49f4-4b1a-8d68-068b5e99f22a-kube-api-access-rkhq7\") pod \"ironic-operator-controller-manager-958664b5-m9jtk\" (UID: \"ae47e69a-49f4-4b1a-8d68-068b5e99f22a\") " pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.151745 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.155299 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnlpq\" (UniqueName: \"kubernetes.io/projected/0a83428f-312c-4590-beb3-8da4994c8951-kube-api-access-dnlpq\") pod \"manila-operator-controller-manager-765668569f-7kctj\" (UID: \"0a83428f-312c-4590-beb3-8da4994c8951\") " pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.155365 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kng9p\" (UniqueName: \"kubernetes.io/projected/9277e421-df3a-49a2-81cc-86d0f7c65809-kube-api-access-kng9p\") pod \"mariadb-operator-controller-manager-67bf948998-p92zm\" (UID: \"9277e421-df3a-49a2-81cc-86d0f7c65809\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.155399 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.155468 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zvdk\" (UniqueName: \"kubernetes.io/projected/9954b0be-71f8-430b-a61f-28a95404c0f7-kube-api-access-5zvdk\") pod \"neutron-operator-controller-manager-694c5bfc85-rjfbj\" (UID: \"9954b0be-71f8-430b-a61f-28a95404c0f7\") " pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.155804 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.161717 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.177407 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2qkmb" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.221060 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kng9p\" (UniqueName: \"kubernetes.io/projected/9277e421-df3a-49a2-81cc-86d0f7c65809-kube-api-access-kng9p\") pod \"mariadb-operator-controller-manager-67bf948998-p92zm\" (UID: \"9277e421-df3a-49a2-81cc-86d0f7c65809\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.228870 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnlpq\" (UniqueName: \"kubernetes.io/projected/0a83428f-312c-4590-beb3-8da4994c8951-kube-api-access-dnlpq\") pod \"manila-operator-controller-manager-765668569f-7kctj\" (UID: \"0a83428f-312c-4590-beb3-8da4994c8951\") " pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.261093 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfrvt\" (UniqueName: \"kubernetes.io/projected/f37006c8-da19-4d17-a6d5-f4b075f2220f-kube-api-access-wfrvt\") pod \"nova-operator-controller-manager-ddcbfd695-gqr2d\" (UID: \"f37006c8-da19-4d17-a6d5-f4b075f2220f\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.261253 4656 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-5zvdk\" (UniqueName: \"kubernetes.io/projected/9954b0be-71f8-430b-a61f-28a95404c0f7-kube-api-access-5zvdk\") pod \"neutron-operator-controller-manager-694c5bfc85-rjfbj\" (UID: \"9954b0be-71f8-430b-a61f-28a95404c0f7\") " pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.261905 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.295635 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.321835 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.321886 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.322341 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.332216 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.342138 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zvdk\" (UniqueName: \"kubernetes.io/projected/9954b0be-71f8-430b-a61f-28a95404c0f7-kube-api-access-5zvdk\") pod \"neutron-operator-controller-manager-694c5bfc85-rjfbj\" (UID: \"9954b0be-71f8-430b-a61f-28a95404c0f7\") " pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.344791 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7r675" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.345139 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-ts2hx" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.356306 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.366063 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfrvt\" (UniqueName: \"kubernetes.io/projected/f37006c8-da19-4d17-a6d5-f4b075f2220f-kube-api-access-wfrvt\") pod \"nova-operator-controller-manager-ddcbfd695-gqr2d\" (UID: \"f37006c8-da19-4d17-a6d5-f4b075f2220f\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.390527 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.422520 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.423453 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfrvt\" (UniqueName: \"kubernetes.io/projected/f37006c8-da19-4d17-a6d5-f4b075f2220f-kube-api-access-wfrvt\") pod \"nova-operator-controller-manager-ddcbfd695-gqr2d\" (UID: \"f37006c8-da19-4d17-a6d5-f4b075f2220f\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.429082 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.430585 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.457845 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.458904 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.459553 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.459917 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wvlbk" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.467862 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcqzj\" (UniqueName: \"kubernetes.io/projected/50db0152-72c0-4fc3-9cd5-6b2c01127341-kube-api-access-hcqzj\") pod \"octavia-operator-controller-manager-5c765b4558-wjspj\" (UID: \"50db0152-72c0-4fc3-9cd5-6b2c01127341\") " pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.467972 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndtpt\" (UniqueName: \"kubernetes.io/projected/132d53b6-84ec-44d6-8f8f-762e9595919e-kube-api-access-ndtpt\") pod \"ovn-operator-controller-manager-788c46999f-rmvr2\" (UID: \"132d53b6-84ec-44d6-8f8f-762e9595919e\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.468006 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:51 crc kubenswrapper[4656]: E0128 15:34:51.468149 4656 secret.go:188] Couldn't get 
secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:51 crc kubenswrapper[4656]: E0128 15:34:51.468225 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert podName:7341d49c-e9a9-4108-8a2c-bf808ccb49cf nodeName:}" failed. No retries permitted until 2026-01-28 15:34:52.468206241 +0000 UTC m=+982.976377045 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert") pod "infra-operator-controller-manager-79955696d6-bfl2p" (UID: "7341d49c-e9a9-4108-8a2c-bf808ccb49cf") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.474225 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-r428b" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.479352 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.495742 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.499213 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-bxscm" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.506977 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.507823 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.524424 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.526545 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4bg4d" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.526874 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.536869 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.558760 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.568829 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcqzj\" (UniqueName: \"kubernetes.io/projected/50db0152-72c0-4fc3-9cd5-6b2c01127341-kube-api-access-hcqzj\") pod \"octavia-operator-controller-manager-5c765b4558-wjspj\" (UID: \"50db0152-72c0-4fc3-9cd5-6b2c01127341\") " pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.568920 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6tv9\" (UniqueName: \"kubernetes.io/projected/92d1569e-5733-4779-b9fb-7feae2ea9317-kube-api-access-m6tv9\") pod \"placement-operator-controller-manager-5b964cf4cd-brxps\" (UID: \"92d1569e-5733-4779-b9fb-7feae2ea9317\") " 
pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.568957 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmpqc\" (UniqueName: \"kubernetes.io/projected/5888a906-8758-4179-a30f-c2244ec46072-kube-api-access-qmpqc\") pod \"telemetry-operator-controller-manager-6d69b9c5db-nmjz8\" (UID: \"5888a906-8758-4179-a30f-c2244ec46072\") " pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.568990 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rclnj\" (UniqueName: \"kubernetes.io/projected/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-kube-api-access-rclnj\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.569030 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.569062 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndtpt\" (UniqueName: \"kubernetes.io/projected/132d53b6-84ec-44d6-8f8f-762e9595919e-kube-api-access-ndtpt\") pod \"ovn-operator-controller-manager-788c46999f-rmvr2\" (UID: \"132d53b6-84ec-44d6-8f8f-762e9595919e\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" Jan 28 
15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.569095 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5mm9\" (UniqueName: \"kubernetes.io/projected/e97e04fa-1b66-4373-b31f-12089f1f5b2b-kube-api-access-p5mm9\") pod \"swift-operator-controller-manager-68fc8c869-9q9vg\" (UID: \"e97e04fa-1b66-4373-b31f-12089f1f5b2b\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.643858 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcqzj\" (UniqueName: \"kubernetes.io/projected/50db0152-72c0-4fc3-9cd5-6b2c01127341-kube-api-access-hcqzj\") pod \"octavia-operator-controller-manager-5c765b4558-wjspj\" (UID: \"50db0152-72c0-4fc3-9cd5-6b2c01127341\") " pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.647654 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndtpt\" (UniqueName: \"kubernetes.io/projected/132d53b6-84ec-44d6-8f8f-762e9595919e-kube-api-access-ndtpt\") pod \"ovn-operator-controller-manager-788c46999f-rmvr2\" (UID: \"132d53b6-84ec-44d6-8f8f-762e9595919e\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.651191 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.655971 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.663155 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.669996 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6tv9\" (UniqueName: \"kubernetes.io/projected/92d1569e-5733-4779-b9fb-7feae2ea9317-kube-api-access-m6tv9\") pod \"placement-operator-controller-manager-5b964cf4cd-brxps\" (UID: \"92d1569e-5733-4779-b9fb-7feae2ea9317\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.670057 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmpqc\" (UniqueName: \"kubernetes.io/projected/5888a906-8758-4179-a30f-c2244ec46072-kube-api-access-qmpqc\") pod \"telemetry-operator-controller-manager-6d69b9c5db-nmjz8\" (UID: \"5888a906-8758-4179-a30f-c2244ec46072\") " pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.670097 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rclnj\" (UniqueName: \"kubernetes.io/projected/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-kube-api-access-rclnj\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.670157 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:51 crc 
kubenswrapper[4656]: I0128 15:34:51.670246 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mm9\" (UniqueName: \"kubernetes.io/projected/e97e04fa-1b66-4373-b31f-12089f1f5b2b-kube-api-access-p5mm9\") pod \"swift-operator-controller-manager-68fc8c869-9q9vg\" (UID: \"e97e04fa-1b66-4373-b31f-12089f1f5b2b\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" Jan 28 15:34:51 crc kubenswrapper[4656]: E0128 15:34:51.671200 4656 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:51 crc kubenswrapper[4656]: E0128 15:34:51.671261 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert podName:3dcf45d4-628c-4071-b732-8ade2d3c4b4e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:52.171243843 +0000 UTC m=+982.679414647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" (UID: "3dcf45d4-628c-4071-b732-8ade2d3c4b4e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.671430 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.672447 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.682402 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fzdw5" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.690244 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.694535 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.703332 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.704825 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.710855 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-vvjhc" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.712333 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5mm9\" (UniqueName: \"kubernetes.io/projected/e97e04fa-1b66-4373-b31f-12089f1f5b2b-kube-api-access-p5mm9\") pod \"swift-operator-controller-manager-68fc8c869-9q9vg\" (UID: \"e97e04fa-1b66-4373-b31f-12089f1f5b2b\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.716285 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.724024 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmpqc\" (UniqueName: \"kubernetes.io/projected/5888a906-8758-4179-a30f-c2244ec46072-kube-api-access-qmpqc\") pod \"telemetry-operator-controller-manager-6d69b9c5db-nmjz8\" (UID: \"5888a906-8758-4179-a30f-c2244ec46072\") " pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.731819 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rclnj\" (UniqueName: \"kubernetes.io/projected/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-kube-api-access-rclnj\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.735356 4656 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m6tv9\" (UniqueName: \"kubernetes.io/projected/92d1569e-5733-4779-b9fb-7feae2ea9317-kube-api-access-m6tv9\") pod \"placement-operator-controller-manager-5b964cf4cd-brxps\" (UID: \"92d1569e-5733-4779-b9fb-7feae2ea9317\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.771873 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgcdr\" (UniqueName: \"kubernetes.io/projected/36524b9c-daa2-46d2-a732-b0964bb08873-kube-api-access-rgcdr\") pod \"watcher-operator-controller-manager-767b8bc766-xlrqs\" (UID: \"36524b9c-daa2-46d2-a732-b0964bb08873\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.771928 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58j57\" (UniqueName: \"kubernetes.io/projected/d903ea5b-f13e-43d5-b65b-44093c70ddee-kube-api-access-58j57\") pod \"test-operator-controller-manager-56f8bfcd9f-bxkwv\" (UID: \"d903ea5b-f13e-43d5-b65b-44093c70ddee\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.781341 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.782299 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.794230 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.794443 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2qnf5" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.795068 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.809101 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.837294 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.966246 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.966382 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgcdr\" (UniqueName: \"kubernetes.io/projected/36524b9c-daa2-46d2-a732-b0964bb08873-kube-api-access-rgcdr\") pod \"watcher-operator-controller-manager-767b8bc766-xlrqs\" (UID: \"36524b9c-daa2-46d2-a732-b0964bb08873\") " 
pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.966485 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58j57\" (UniqueName: \"kubernetes.io/projected/d903ea5b-f13e-43d5-b65b-44093c70ddee-kube-api-access-58j57\") pod \"test-operator-controller-manager-56f8bfcd9f-bxkwv\" (UID: \"d903ea5b-f13e-43d5-b65b-44093c70ddee\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.966570 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.966823 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj2qx\" (UniqueName: \"kubernetes.io/projected/010cc4f5-4ac8-46e0-be08-80218981003e-kube-api-access-nj2qx\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.985987 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.987813 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.998446 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" event={"ID":"0cf0a4ad-85dd-47df-9307-e469f075a098","Type":"ContainerStarted","Data":"f2f1e0bb3ca27f3ebf55820f7580846953fbac34e758150c04de281d1465ff73"} Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.030842 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8"] Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.032114 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.032117 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58j57\" (UniqueName: \"kubernetes.io/projected/d903ea5b-f13e-43d5-b65b-44093c70ddee-kube-api-access-58j57\") pod \"test-operator-controller-manager-56f8bfcd9f-bxkwv\" (UID: \"d903ea5b-f13e-43d5-b65b-44093c70ddee\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.040331 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8"] Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.042442 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-l2w7h" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.042446 4656 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgcdr\" (UniqueName: \"kubernetes.io/projected/36524b9c-daa2-46d2-a732-b0964bb08873-kube-api-access-rgcdr\") pod \"watcher-operator-controller-manager-767b8bc766-xlrqs\" (UID: \"36524b9c-daa2-46d2-a732-b0964bb08873\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.053342 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.069474 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.069556 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj2qx\" (UniqueName: \"kubernetes.io/projected/010cc4f5-4ac8-46e0-be08-80218981003e-kube-api-access-nj2qx\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.069637 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88ck8\" (UniqueName: \"kubernetes.io/projected/0bb42d6d-259a-4532-b3e2-732c0f271d9a-kube-api-access-88ck8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sqgs8\" (UID: \"0bb42d6d-259a-4532-b3e2-732c0f271d9a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" Jan 28 
15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.069666 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.069664 4656 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.070208 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:52.570187942 +0000 UTC m=+983.078358746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "metrics-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.070133 4656 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.070852 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:52.57083686 +0000 UTC m=+983.079007664 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.083521 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.106569 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj2qx\" (UniqueName: \"kubernetes.io/projected/010cc4f5-4ac8-46e0-be08-80218981003e-kube-api-access-nj2qx\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.124171 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.170697 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88ck8\" (UniqueName: \"kubernetes.io/projected/0bb42d6d-259a-4532-b3e2-732c0f271d9a-kube-api-access-88ck8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sqgs8\" (UID: \"0bb42d6d-259a-4532-b3e2-732c0f271d9a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.192476 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88ck8\" (UniqueName: \"kubernetes.io/projected/0bb42d6d-259a-4532-b3e2-732c0f271d9a-kube-api-access-88ck8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sqgs8\" (UID: \"0bb42d6d-259a-4532-b3e2-732c0f271d9a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.271708 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.271924 4656 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.271976 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert podName:3dcf45d4-628c-4071-b732-8ade2d3c4b4e nodeName:}" failed. 
No retries permitted until 2026-01-28 15:34:53.271961997 +0000 UTC m=+983.780132801 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" (UID: "3dcf45d4-628c-4071-b732-8ade2d3c4b4e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.473083 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq"] Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.479672 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.483913 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.484137 4656 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.484213 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert podName:7341d49c-e9a9-4108-8a2c-bf808ccb49cf nodeName:}" failed. No retries permitted until 2026-01-28 15:34:54.484198484 +0000 UTC m=+984.992369288 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert") pod "infra-operator-controller-manager-79955696d6-bfl2p" (UID: "7341d49c-e9a9-4108-8a2c-bf808ccb49cf") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: W0128 15:34:52.487885 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod113ba11f_aeba_4710_b5f6_0991e9766d45.slice/crio-524e363e6badba1c95dda78bfe6f3f3a4e21928dc90ae7745399a092bbc4a39c WatchSource:0}: Error finding container 524e363e6badba1c95dda78bfe6f3f3a4e21928dc90ae7745399a092bbc4a39c: Status 404 returned error can't find the container with id 524e363e6badba1c95dda78bfe6f3f3a4e21928dc90ae7745399a092bbc4a39c Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.522432 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q"] Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.542134 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw"] Jan 28 15:34:52 crc kubenswrapper[4656]: W0128 15:34:52.568491 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfeab083_1268_47aa_938e_bd91036755de.slice/crio-8f7ed47cd17305ebe1bca1e7dc31291922d5c6a643c8b01969e7c2207c353147 WatchSource:0}: Error finding container 8f7ed47cd17305ebe1bca1e7dc31291922d5c6a643c8b01969e7c2207c353147: Status 404 returned error can't find the container with id 8f7ed47cd17305ebe1bca1e7dc31291922d5c6a643c8b01969e7c2207c353147 Jan 28 15:34:52 crc kubenswrapper[4656]: W0128 15:34:52.576031 4656 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ce4cdbc_3227_4679_8da9_9fd537996bd7.slice/crio-42a1354663bf4071a1659ff2622f41942c0de3af3d77c2049efd0c4c138a16aa WatchSource:0}: Error finding container 42a1354663bf4071a1659ff2622f41942c0de3af3d77c2049efd0c4c138a16aa: Status 404 returned error can't find the container with id 42a1354663bf4071a1659ff2622f41942c0de3af3d77c2049efd0c4c138a16aa Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.586377 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.586476 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.586718 4656 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.586794 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:53.586773861 +0000 UTC m=+984.094944665 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "metrics-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.587324 4656 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.587379 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:53.587366188 +0000 UTC m=+984.095536992 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.727706 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt"] Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.781779 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw"] Jan 28 15:34:52 crc kubenswrapper[4656]: W0128 15:34:52.797614 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bfa2d1e_9ab0_478a_a19d_d031a1a8a312.slice/crio-4aa89c6005a02a290de18324861d96b92a7ce9f0c01261703d27fcb4e69057aa WatchSource:0}: Error finding container 4aa89c6005a02a290de18324861d96b92a7ce9f0c01261703d27fcb4e69057aa: Status 404 returned error can't find the 
container with id 4aa89c6005a02a290de18324861d96b92a7ce9f0c01261703d27fcb4e69057aa Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.802320 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2"] Jan 28 15:34:52 crc kubenswrapper[4656]: W0128 15:34:52.818444 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5bdaf78_b590_429f_bc9b_46c67a369456.slice/crio-cefcc29db34c4ed3a690281892bd148cdbfd6f69de2de0b764dbf7090127c5ba WatchSource:0}: Error finding container cefcc29db34c4ed3a690281892bd148cdbfd6f69de2de0b764dbf7090127c5ba: Status 404 returned error can't find the container with id cefcc29db34c4ed3a690281892bd148cdbfd6f69de2de0b764dbf7090127c5ba Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.006946 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" event={"ID":"1bfa2d1e-9ab0-478a-a19d-d031a1a8a312","Type":"ContainerStarted","Data":"4aa89c6005a02a290de18324861d96b92a7ce9f0c01261703d27fcb4e69057aa"} Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.011917 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" event={"ID":"cfeab083-1268-47aa-938e-bd91036755de","Type":"ContainerStarted","Data":"8f7ed47cd17305ebe1bca1e7dc31291922d5c6a643c8b01969e7c2207c353147"} Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.020124 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" event={"ID":"113ba11f-aeba-4710-b5f6-0991e9766d45","Type":"ContainerStarted","Data":"524e363e6badba1c95dda78bfe6f3f3a4e21928dc90ae7745399a092bbc4a39c"} Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.022431 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" event={"ID":"6ce4cdbc-3227-4679-8da9-9fd537996bd7","Type":"ContainerStarted","Data":"42a1354663bf4071a1659ff2622f41942c0de3af3d77c2049efd0c4c138a16aa"} Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.024368 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" event={"ID":"45be18b4-f249-4c09-8875-9959686d7f8f","Type":"ContainerStarted","Data":"84f597ea9c518a35c17623de73d211d5cc8d5c40af45b11cc6bbf1f274811720"} Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.026907 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" event={"ID":"a5bdaf78-b590-429f-bc9b-46c67a369456","Type":"ContainerStarted","Data":"cefcc29db34c4ed3a690281892bd148cdbfd6f69de2de0b764dbf7090127c5ba"} Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.196651 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.228589 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-765668569f-7kctj"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.241883 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.300422 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.303364 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod 
\"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.303616 4656 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.303731 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert podName:3dcf45d4-628c-4071-b732-8ade2d3c4b4e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:55.303702314 +0000 UTC m=+985.811873118 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" (UID: "3dcf45d4-628c-4071-b732-8ade2d3c4b4e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.332145 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.342282 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2"] Jan 28 15:34:53 crc kubenswrapper[4656]: W0128 15:34:53.346651 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf37006c8_da19_4d17_a6d5_f4b075f2220f.slice/crio-28586ddab44f4bda0bbcda1f1c0df8e9a79f1719c86e0713082fde0da7ec19ff WatchSource:0}: Error finding container 28586ddab44f4bda0bbcda1f1c0df8e9a79f1719c86e0713082fde0da7ec19ff: Status 404 returned error can't find the container with id 
28586ddab44f4bda0bbcda1f1c0df8e9a79f1719c86e0713082fde0da7ec19ff Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.350455 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.355431 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg"] Jan 28 15:34:53 crc kubenswrapper[4656]: W0128 15:34:53.363021 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod132d53b6_84ec_44d6_8f8f_762e9595919e.slice/crio-d875be74bd4a0979b68acb9161d4c83fa04df1baaed9a4603b9ac7762ead48c0 WatchSource:0}: Error finding container d875be74bd4a0979b68acb9161d4c83fa04df1baaed9a4603b9ac7762ead48c0: Status 404 returned error can't find the container with id d875be74bd4a0979b68acb9161d4c83fa04df1baaed9a4603b9ac7762ead48c0 Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.365988 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.375497 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.378628 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.385232 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.390659 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8"] Jan 28 15:34:53 
crc kubenswrapper[4656]: E0128 15:34:53.430895 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6tv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-brxps_openstack-operators(92d1569e-5733-4779-b9fb-7feae2ea9317): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.432707 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" podUID="92d1569e-5733-4779-b9fb-7feae2ea9317" Jan 28 15:34:53 crc kubenswrapper[4656]: W0128 15:34:53.434359 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode97e04fa_1b66_4373_b31f_12089f1f5b2b.slice/crio-97b340278e2bb40f0d7d83fefb29266f7ab14c4a223e3d1e4d99bfd17a9f6cf0 WatchSource:0}: Error finding container 97b340278e2bb40f0d7d83fefb29266f7ab14c4a223e3d1e4d99bfd17a9f6cf0: Status 404 returned error can't find the container with id 97b340278e2bb40f0d7d83fefb29266f7ab14c4a223e3d1e4d99bfd17a9f6cf0 Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.435118 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58j57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-bxkwv_openstack-operators(d903ea5b-f13e-43d5-b65b-44093c70ddee): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.442770 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" podUID="d903ea5b-f13e-43d5-b65b-44093c70ddee" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.508759 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-88ck8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-sqgs8_openstack-operators(0bb42d6d-259a-4532-b3e2-732c0f271d9a): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.519189 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5mm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-9q9vg_openstack-operators(e97e04fa-1b66-4373-b31f-12089f1f5b2b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.519833 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:c9d639f3d01f7a4f139a8b7fb751ca880893f7b9a4e596d6a5304534e46392ba,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qmpqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6d69b9c5db-nmjz8_openstack-operators(5888a906-8758-4179-a30f-c2244ec46072): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.520351 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" podUID="0bb42d6d-259a-4532-b3e2-732c0f271d9a" Jan 28 15:34:53 crc 
kubenswrapper[4656]: E0128 15:34:53.528855 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" podUID="5888a906-8758-4179-a30f-c2244ec46072" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.529007 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" podUID="e97e04fa-1b66-4373-b31f-12089f1f5b2b" Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.626043 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.626209 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.626396 4656 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.626466 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. 
No retries permitted until 2026-01-28 15:34:55.626445397 +0000 UTC m=+986.134616201 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "webhook-server-cert" not found Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.626865 4656 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.626898 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:55.626890539 +0000 UTC m=+986.135061343 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "metrics-server-cert" not found Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.037740 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" event={"ID":"0bb42d6d-259a-4532-b3e2-732c0f271d9a","Type":"ContainerStarted","Data":"fcdc5e1eb16256a60f9e9a1658d315a52e64c752300d9d4dfd3b913ced29cb48"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.039274 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" event={"ID":"36524b9c-daa2-46d2-a732-b0964bb08873","Type":"ContainerStarted","Data":"734e772f82404bdf09aa164624787ad8615ea04c812298bed5986036b0dff158"} Jan 28 15:34:54 crc kubenswrapper[4656]: E0128 
15:34:54.043523 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" podUID="0bb42d6d-259a-4532-b3e2-732c0f271d9a" Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.048661 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" event={"ID":"f37006c8-da19-4d17-a6d5-f4b075f2220f","Type":"ContainerStarted","Data":"28586ddab44f4bda0bbcda1f1c0df8e9a79f1719c86e0713082fde0da7ec19ff"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.053818 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" event={"ID":"9954b0be-71f8-430b-a61f-28a95404c0f7","Type":"ContainerStarted","Data":"b523f4627da4d961e55e0ca354187359238aec0f72b438929321c01470630a54"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.055704 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" event={"ID":"d903ea5b-f13e-43d5-b65b-44093c70ddee","Type":"ContainerStarted","Data":"fb83bd714aaf5342241f75311dd5ed2767d4d421fe0639a9c3c1f63b595dea7b"} Jan 28 15:34:54 crc kubenswrapper[4656]: E0128 15:34:54.057889 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" podUID="d903ea5b-f13e-43d5-b65b-44093c70ddee" Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 
15:34:54.058973 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" event={"ID":"92d1569e-5733-4779-b9fb-7feae2ea9317","Type":"ContainerStarted","Data":"5b692085648e1fd94d9ffc63fc4391e244a5562d6eafb7dd6fb2fccf54f5e206"} Jan 28 15:34:54 crc kubenswrapper[4656]: E0128 15:34:54.062479 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" podUID="92d1569e-5733-4779-b9fb-7feae2ea9317" Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.063644 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" event={"ID":"ae47e69a-49f4-4b1a-8d68-068b5e99f22a","Type":"ContainerStarted","Data":"c3260c66ea07ec03ee65adbe4433875a08cad4e356327f3835038f8dbf7bf90c"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.065071 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" event={"ID":"132d53b6-84ec-44d6-8f8f-762e9595919e","Type":"ContainerStarted","Data":"d875be74bd4a0979b68acb9161d4c83fa04df1baaed9a4603b9ac7762ead48c0"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.068945 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" event={"ID":"0a83428f-312c-4590-beb3-8da4994c8951","Type":"ContainerStarted","Data":"1cc886493ea1a5aed490f85e19648c22a5327acc64ec656181951a16c9b99519"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.070728 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" event={"ID":"9277e421-df3a-49a2-81cc-86d0f7c65809","Type":"ContainerStarted","Data":"1f0d7b09e6fa64c21c251d4962b51ffbe4cd6c93feb51d936871b12b81ae87c4"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.072005 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" event={"ID":"5888a906-8758-4179-a30f-c2244ec46072","Type":"ContainerStarted","Data":"a3dcfab8c0cf196ae0aba68ef6d314f42a0da045728fe19a25d079389b063406"} Jan 28 15:34:54 crc kubenswrapper[4656]: E0128 15:34:54.074739 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:c9d639f3d01f7a4f139a8b7fb751ca880893f7b9a4e596d6a5304534e46392ba\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" podUID="5888a906-8758-4179-a30f-c2244ec46072" Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.077974 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" event={"ID":"50db0152-72c0-4fc3-9cd5-6b2c01127341","Type":"ContainerStarted","Data":"baceeba5404b339d1d658ace96297a872c3635685cca84d1659fded0b5def497"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.079498 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" event={"ID":"e97e04fa-1b66-4373-b31f-12089f1f5b2b","Type":"ContainerStarted","Data":"97b340278e2bb40f0d7d83fefb29266f7ab14c4a223e3d1e4d99bfd17a9f6cf0"} Jan 28 15:34:54 crc kubenswrapper[4656]: E0128 15:34:54.081485 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" podUID="e97e04fa-1b66-4373-b31f-12089f1f5b2b" Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.545291 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:54 crc kubenswrapper[4656]: E0128 15:34:54.545499 4656 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:54 crc kubenswrapper[4656]: E0128 15:34:54.545597 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert podName:7341d49c-e9a9-4108-8a2c-bf808ccb49cf nodeName:}" failed. No retries permitted until 2026-01-28 15:34:58.545577017 +0000 UTC m=+989.053747821 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert") pod "infra-operator-controller-manager-79955696d6-bfl2p" (UID: "7341d49c-e9a9-4108-8a2c-bf808ccb49cf") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.094115 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" podUID="0bb42d6d-259a-4532-b3e2-732c0f271d9a" Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.094541 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" podUID="e97e04fa-1b66-4373-b31f-12089f1f5b2b" Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.094605 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" podUID="d903ea5b-f13e-43d5-b65b-44093c70ddee" Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.094715 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" podUID="92d1569e-5733-4779-b9fb-7feae2ea9317" Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.094737 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:c9d639f3d01f7a4f139a8b7fb751ca880893f7b9a4e596d6a5304534e46392ba\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" podUID="5888a906-8758-4179-a30f-c2244ec46072" Jan 28 15:34:55 crc kubenswrapper[4656]: I0128 15:34:55.358456 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.358735 4656 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.358803 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert podName:3dcf45d4-628c-4071-b732-8ade2d3c4b4e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:59.358783226 +0000 UTC m=+989.866954020 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" (UID: "3dcf45d4-628c-4071-b732-8ade2d3c4b4e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:55 crc kubenswrapper[4656]: I0128 15:34:55.662597 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.662812 4656 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.663410 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:59.663383315 +0000 UTC m=+990.171554119 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "webhook-server-cert" not found Jan 28 15:34:55 crc kubenswrapper[4656]: I0128 15:34:55.663334 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.663679 4656 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.663828 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:59.663803147 +0000 UTC m=+990.171974031 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "metrics-server-cert" not found Jan 28 15:34:58 crc kubenswrapper[4656]: I0128 15:34:58.621331 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:58 crc kubenswrapper[4656]: E0128 15:34:58.621542 4656 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:58 crc kubenswrapper[4656]: E0128 15:34:58.621910 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert podName:7341d49c-e9a9-4108-8a2c-bf808ccb49cf nodeName:}" failed. No retries permitted until 2026-01-28 15:35:06.621882776 +0000 UTC m=+997.130053580 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert") pod "infra-operator-controller-manager-79955696d6-bfl2p" (UID: "7341d49c-e9a9-4108-8a2c-bf808ccb49cf") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:59 crc kubenswrapper[4656]: I0128 15:34:59.364788 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:59 crc kubenswrapper[4656]: E0128 15:34:59.365124 4656 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:59 crc kubenswrapper[4656]: E0128 15:34:59.365211 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert podName:3dcf45d4-628c-4071-b732-8ade2d3c4b4e nodeName:}" failed. No retries permitted until 2026-01-28 15:35:07.36519024 +0000 UTC m=+997.873361044 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" (UID: "3dcf45d4-628c-4071-b732-8ade2d3c4b4e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:59 crc kubenswrapper[4656]: I0128 15:34:59.685707 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:59 crc kubenswrapper[4656]: I0128 15:34:59.685859 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:59 crc kubenswrapper[4656]: E0128 15:34:59.686252 4656 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:34:59 crc kubenswrapper[4656]: E0128 15:34:59.686331 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:35:07.686312285 +0000 UTC m=+998.194483079 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "webhook-server-cert" not found Jan 28 15:34:59 crc kubenswrapper[4656]: E0128 15:34:59.686856 4656 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:34:59 crc kubenswrapper[4656]: E0128 15:34:59.686900 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:35:07.686891722 +0000 UTC m=+998.195062526 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "metrics-server-cert" not found Jan 28 15:35:06 crc kubenswrapper[4656]: I0128 15:35:06.689851 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:35:06 crc kubenswrapper[4656]: I0128 15:35:06.695707 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:35:06 crc 
kubenswrapper[4656]: I0128 15:35:06.810756 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.399196 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.407735 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.695328 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.703331 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.703426 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.713895 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.716925 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.756480 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:35:10 crc kubenswrapper[4656]: E0128 15:35:10.696462 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:22665b40ffeef62d1a612c1f9f0fa8e97ff95085fad123895d786b770f421fc0" Jan 28 15:35:10 crc kubenswrapper[4656]: E0128 15:35:10.697066 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:22665b40ffeef62d1a612c1f9f0fa8e97ff95085fad123895d786b770f421fc0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zvdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-694c5bfc85-rjfbj_openstack-operators(9954b0be-71f8-430b-a61f-28a95404c0f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:10 crc kubenswrapper[4656]: E0128 15:35:10.698283 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" podUID="9954b0be-71f8-430b-a61f-28a95404c0f7" Jan 28 15:35:11 crc kubenswrapper[4656]: I0128 15:35:11.269803 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:35:11 crc kubenswrapper[4656]: I0128 15:35:11.270220 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:35:11 crc kubenswrapper[4656]: E0128 15:35:11.419154 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:22665b40ffeef62d1a612c1f9f0fa8e97ff95085fad123895d786b770f421fc0\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" podUID="9954b0be-71f8-430b-a61f-28a95404c0f7" Jan 28 15:35:12 crc kubenswrapper[4656]: E0128 15:35:12.089599 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/manila-operator@sha256:2e1a77365c3b08ff39892565abfc72b72e969f623e58a2663fb93890371fc9da" Jan 28 15:35:12 crc kubenswrapper[4656]: E0128 15:35:12.089869 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:2e1a77365c3b08ff39892565abfc72b72e969f623e58a2663fb93890371fc9da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dnlpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-765668569f-7kctj_openstack-operators(0a83428f-312c-4590-beb3-8da4994c8951): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:12 crc kubenswrapper[4656]: E0128 15:35:12.091045 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" podUID="0a83428f-312c-4590-beb3-8da4994c8951" Jan 28 15:35:12 crc kubenswrapper[4656]: E0128 15:35:12.405028 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:2e1a77365c3b08ff39892565abfc72b72e969f623e58a2663fb93890371fc9da\\\"\"" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" podUID="0a83428f-312c-4590-beb3-8da4994c8951" Jan 28 15:35:13 crc kubenswrapper[4656]: E0128 15:35:13.347742 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 28 15:35:13 crc kubenswrapper[4656]: E0128 15:35:13.347973 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kng9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-p92zm_openstack-operators(9277e421-df3a-49a2-81cc-86d0f7c65809): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:13 crc kubenswrapper[4656]: E0128 15:35:13.349213 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" podUID="9277e421-df3a-49a2-81cc-86d0f7c65809" Jan 28 15:35:13 crc kubenswrapper[4656]: E0128 15:35:13.411849 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" podUID="9277e421-df3a-49a2-81cc-86d0f7c65809" Jan 28 15:35:14 crc kubenswrapper[4656]: E0128 15:35:14.860866 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/designate-operator@sha256:29a3092217e72f1ec8a163ed3d15a0a5ccc5b3117e64c72bf5e68597cc233b3d" Jan 28 15:35:14 crc kubenswrapper[4656]: E0128 15:35:14.861285 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/designate-operator@sha256:29a3092217e72f1ec8a163ed3d15a0a5ccc5b3117e64c72bf5e68597cc233b3d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rn4q4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-66dfbd6f5d-r8cjw_openstack-operators(cfeab083-1268-47aa-938e-bd91036755de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:14 crc kubenswrapper[4656]: E0128 15:35:14.862830 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" podUID="cfeab083-1268-47aa-938e-bd91036755de" Jan 28 15:35:15 crc kubenswrapper[4656]: E0128 15:35:15.447951 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/designate-operator@sha256:29a3092217e72f1ec8a163ed3d15a0a5ccc5b3117e64c72bf5e68597cc233b3d\\\"\"" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" podUID="cfeab083-1268-47aa-938e-bd91036755de" Jan 28 15:35:15 crc kubenswrapper[4656]: E0128 15:35:15.503384 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4" Jan 28 15:35:15 crc kubenswrapper[4656]: E0128 15:35:15.503885 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rgcdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-767b8bc766-xlrqs_openstack-operators(36524b9c-daa2-46d2-a732-b0964bb08873): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:15 crc kubenswrapper[4656]: E0128 15:35:15.506344 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" podUID="36524b9c-daa2-46d2-a732-b0964bb08873" Jan 28 15:35:16 crc kubenswrapper[4656]: E0128 15:35:16.116714 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/ironic-operator@sha256:5f48b6af05a584d3da5c973f83195d999cc151aa0f187cabc8002cb46d60afe5" Jan 28 15:35:16 crc kubenswrapper[4656]: E0128 15:35:16.117403 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/ironic-operator@sha256:5f48b6af05a584d3da5c973f83195d999cc151aa0f187cabc8002cb46d60afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkhq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-958664b5-m9jtk_openstack-operators(ae47e69a-49f4-4b1a-8d68-068b5e99f22a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:16 crc kubenswrapper[4656]: E0128 15:35:16.118600 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" podUID="ae47e69a-49f4-4b1a-8d68-068b5e99f22a" Jan 28 15:35:16 crc kubenswrapper[4656]: E0128 15:35:16.447506 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" podUID="36524b9c-daa2-46d2-a732-b0964bb08873" Jan 28 15:35:16 crc kubenswrapper[4656]: E0128 15:35:16.447793 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:5f48b6af05a584d3da5c973f83195d999cc151aa0f187cabc8002cb46d60afe5\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" podUID="ae47e69a-49f4-4b1a-8d68-068b5e99f22a" Jan 28 15:35:16 crc kubenswrapper[4656]: E0128 15:35:16.836026 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 28 15:35:16 crc kubenswrapper[4656]: E0128 15:35:16.836311 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6l2vx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-9q5lw_openstack-operators(1bfa2d1e-9ab0-478a-a19d-d031a1a8a312): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:16 crc kubenswrapper[4656]: E0128 15:35:16.837763 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" podUID="1bfa2d1e-9ab0-478a-a19d-d031a1a8a312" Jan 28 15:35:17 crc kubenswrapper[4656]: E0128 15:35:17.455093 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" podUID="1bfa2d1e-9ab0-478a-a19d-d031a1a8a312" Jan 28 15:35:19 crc kubenswrapper[4656]: E0128 15:35:19.989041 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/cinder-operator@sha256:6da7ec7bf701fe1dd489852a16429f163a69073fae67b872dca4b080cc3514ad" Jan 28 15:35:19 crc kubenswrapper[4656]: E0128 15:35:19.989470 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/cinder-operator@sha256:6da7ec7bf701fe1dd489852a16429f163a69073fae67b872dca4b080cc3514ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c2txf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-f6487bd57-jwv7f_openstack-operators(0cf0a4ad-85dd-47df-9307-e469f075a098): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:19 crc kubenswrapper[4656]: E0128 15:35:19.991123 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" podUID="0cf0a4ad-85dd-47df-9307-e469f075a098" Jan 28 15:35:20 crc kubenswrapper[4656]: E0128 15:35:20.473948 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/cinder-operator@sha256:6da7ec7bf701fe1dd489852a16429f163a69073fae67b872dca4b080cc3514ad\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" podUID="0cf0a4ad-85dd-47df-9307-e469f075a098" Jan 28 15:35:20 crc kubenswrapper[4656]: E0128 15:35:20.502982 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/glance-operator@sha256:8a7e2637765333c555b0b932c2bfc789235aea2c7276961657a03ef1352a7264" Jan 28 15:35:20 crc kubenswrapper[4656]: E0128 15:35:20.503290 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/glance-operator@sha256:8a7e2637765333c555b0b932c2bfc789235aea2c7276961657a03ef1352a7264,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pck4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-6db5dbd896-cfpjq_openstack-operators(113ba11f-aeba-4710-b5f6-0991e9766d45): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:20 crc kubenswrapper[4656]: E0128 15:35:20.505170 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" podUID="113ba11f-aeba-4710-b5f6-0991e9766d45" Jan 28 15:35:21 crc kubenswrapper[4656]: E0128 15:35:21.488086 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/glance-operator@sha256:8a7e2637765333c555b0b932c2bfc789235aea2c7276961657a03ef1352a7264\\\"\"" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" podUID="113ba11f-aeba-4710-b5f6-0991e9766d45" Jan 28 15:35:22 crc kubenswrapper[4656]: E0128 15:35:22.106639 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:c7804813a3bba8910a47a5f32bd528335e18397f93cf5f7e7181d3d2c209b59b" Jan 28 15:35:22 crc kubenswrapper[4656]: E0128 15:35:22.106938 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:c7804813a3bba8910a47a5f32bd528335e18397f93cf5f7e7181d3d2c209b59b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hcqzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5c765b4558-wjspj_openstack-operators(50db0152-72c0-4fc3-9cd5-6b2c01127341): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:22 crc kubenswrapper[4656]: E0128 15:35:22.108155 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" podUID="50db0152-72c0-4fc3-9cd5-6b2c01127341" Jan 28 15:35:22 crc kubenswrapper[4656]: E0128 15:35:22.486745 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/octavia-operator@sha256:c7804813a3bba8910a47a5f32bd528335e18397f93cf5f7e7181d3d2c209b59b\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" podUID="50db0152-72c0-4fc3-9cd5-6b2c01127341" Jan 28 15:35:22 crc kubenswrapper[4656]: E0128 15:35:22.766205 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:f832a7a2326f1b84e7963fdea324e2a5285d636b366f059465c98299ae2d2d63" Jan 28 15:35:22 crc kubenswrapper[4656]: E0128 15:35:22.766502 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:f832a7a2326f1b84e7963fdea324e2a5285d636b366f059465c98299ae2d2d63,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mbknf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7b84b46695-86ht2_openstack-operators(a5bdaf78-b590-429f-bc9b-46c67a369456): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:22 crc kubenswrapper[4656]: E0128 15:35:22.767739 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" podUID="a5bdaf78-b590-429f-bc9b-46c67a369456" Jan 28 15:35:23 crc kubenswrapper[4656]: E0128 15:35:23.493787 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/keystone-operator@sha256:f832a7a2326f1b84e7963fdea324e2a5285d636b366f059465c98299ae2d2d63\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" podUID="a5bdaf78-b590-429f-bc9b-46c67a369456" Jan 28 15:35:26 crc kubenswrapper[4656]: E0128 15:35:26.071431 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 28 15:35:26 crc kubenswrapper[4656]: E0128 15:35:26.072145 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58j57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-bxkwv_openstack-operators(d903ea5b-f13e-43d5-b65b-44093c70ddee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:26 crc kubenswrapper[4656]: E0128 15:35:26.074072 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" podUID="d903ea5b-f13e-43d5-b65b-44093c70ddee" Jan 28 15:35:26 crc kubenswrapper[4656]: E0128 15:35:26.743698 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Jan 28 15:35:26 crc kubenswrapper[4656]: E0128 15:35:26.744113 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5mm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-9q9vg_openstack-operators(e97e04fa-1b66-4373-b31f-12089f1f5b2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:26 crc kubenswrapper[4656]: E0128 15:35:26.745501 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" podUID="e97e04fa-1b66-4373-b31f-12089f1f5b2b" Jan 28 15:35:27 crc kubenswrapper[4656]: I0128 15:35:27.177526 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:35:29 crc kubenswrapper[4656]: E0128 15:35:29.417715 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/telemetry-operator@sha256:c9d639f3d01f7a4f139a8b7fb751ca880893f7b9a4e596d6a5304534e46392ba" Jan 28 15:35:29 crc kubenswrapper[4656]: E0128 15:35:29.417980 4656 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:c9d639f3d01f7a4f139a8b7fb751ca880893f7b9a4e596d6a5304534e46392ba,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qmpqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6d69b9c5db-nmjz8_openstack-operators(5888a906-8758-4179-a30f-c2244ec46072): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:29 crc kubenswrapper[4656]: E0128 15:35:29.419256 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" podUID="5888a906-8758-4179-a30f-c2244ec46072" Jan 28 15:35:30 crc kubenswrapper[4656]: E0128 15:35:30.113456 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 28 15:35:30 crc kubenswrapper[4656]: E0128 15:35:30.113755 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6tv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-brxps_openstack-operators(92d1569e-5733-4779-b9fb-7feae2ea9317): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:30 crc kubenswrapper[4656]: E0128 15:35:30.116127 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" podUID="92d1569e-5733-4779-b9fb-7feae2ea9317" Jan 28 15:35:31 crc kubenswrapper[4656]: E0128 15:35:31.009783 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61" Jan 28 15:35:31 crc kubenswrapper[4656]: E0128 15:35:31.010081 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wfrvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-ddcbfd695-gqr2d_openstack-operators(f37006c8-da19-4d17-a6d5-f4b075f2220f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:31 crc kubenswrapper[4656]: E0128 15:35:31.011993 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" podUID="f37006c8-da19-4d17-a6d5-f4b075f2220f" Jan 28 15:35:31 crc kubenswrapper[4656]: E0128 15:35:31.667461 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 28 15:35:31 crc kubenswrapper[4656]: E0128 15:35:31.668298 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-88ck8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-sqgs8_openstack-operators(0bb42d6d-259a-4532-b3e2-732c0f271d9a): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:31 crc kubenswrapper[4656]: E0128 15:35:31.669661 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" podUID="0bb42d6d-259a-4532-b3e2-732c0f271d9a" Jan 28 15:35:32 crc kubenswrapper[4656]: E0128 15:35:32.165448 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61\\\"\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" podUID="f37006c8-da19-4d17-a6d5-f4b075f2220f" Jan 28 15:35:32 crc kubenswrapper[4656]: I0128 15:35:32.863047 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn"] Jan 28 15:35:32 crc kubenswrapper[4656]: I0128 15:35:32.922245 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p"] Jan 28 15:35:33 crc kubenswrapper[4656]: I0128 15:35:33.047762 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv"] Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.587206 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" event={"ID":"7341d49c-e9a9-4108-8a2c-bf808ccb49cf","Type":"ContainerStarted","Data":"8a24e9b05ad66110a394f610b8ea77f4909847e91224300529898871d9721271"} Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.593452 4656 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" event={"ID":"132d53b6-84ec-44d6-8f8f-762e9595919e","Type":"ContainerStarted","Data":"06bc9aef0fd98a9893b51d390852a16af89e5d3814041c2bfbae1fe1a03756ac"} Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.594543 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.600640 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" event={"ID":"6ce4cdbc-3227-4679-8da9-9fd537996bd7","Type":"ContainerStarted","Data":"6de222a7b02d2f681bf300c9ff4ba45c9a6ce04f723e561461887b4652f25b25"} Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.600894 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.603528 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" event={"ID":"010cc4f5-4ac8-46e0-be08-80218981003e","Type":"ContainerStarted","Data":"d69367ab05162c41e71aa5477a4a928046a3d4abb6a9d3032ff4f1501dc11c5f"} Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.613593 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" event={"ID":"3dcf45d4-628c-4071-b732-8ade2d3c4b4e","Type":"ContainerStarted","Data":"da1921127dbed16d5469f5358682b34de29126e475b84f45dc4de7de13b32d70"} Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.660454 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" podStartSLOduration=14.476789286 
podStartE2EDuration="44.660424572s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:52.578029709 +0000 UTC m=+983.086200513" lastFinishedPulling="2026-01-28 15:35:22.761664995 +0000 UTC m=+1013.269835799" observedRunningTime="2026-01-28 15:35:34.656373845 +0000 UTC m=+1025.164544649" watchObservedRunningTime="2026-01-28 15:35:34.660424572 +0000 UTC m=+1025.168595376" Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.664078 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" podStartSLOduration=14.267700502 podStartE2EDuration="43.664060457s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.365465745 +0000 UTC m=+983.873636549" lastFinishedPulling="2026-01-28 15:35:22.7618257 +0000 UTC m=+1013.269996504" observedRunningTime="2026-01-28 15:35:34.621711716 +0000 UTC m=+1025.129882520" watchObservedRunningTime="2026-01-28 15:35:34.664060457 +0000 UTC m=+1025.172231261" Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.628516 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" event={"ID":"9954b0be-71f8-430b-a61f-28a95404c0f7","Type":"ContainerStarted","Data":"8f9368524f080563ff26e3aae2c72d048be1b02b0b1c8966c195f138c8517388"} Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.630202 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.636334 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" event={"ID":"0cf0a4ad-85dd-47df-9307-e469f075a098","Type":"ContainerStarted","Data":"aaf265a5d181e454037130389243c88ca45edeb1e3ebce32ee221f6918fb280f"} Jan 28 15:35:35 crc 
kubenswrapper[4656]: I0128 15:35:35.636595 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.639333 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" event={"ID":"0a83428f-312c-4590-beb3-8da4994c8951","Type":"ContainerStarted","Data":"2fef2fe2438c62dfe85f51ac45a0fcf784ca834800a760f77dd8d118a210ac68"} Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.640078 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.641878 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" event={"ID":"a5bdaf78-b590-429f-bc9b-46c67a369456","Type":"ContainerStarted","Data":"3efb0e87471a3f77df1f1448c729d0de9b0fadbfd0eef8fb9d1282a564a62a3f"} Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.642091 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.655137 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" event={"ID":"ae47e69a-49f4-4b1a-8d68-068b5e99f22a","Type":"ContainerStarted","Data":"7d846cf0ddef2965d73cfcf62a660b04dfb82183415597e509fb6c8210cf1355"} Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.655425 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.675752 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" event={"ID":"1bfa2d1e-9ab0-478a-a19d-d031a1a8a312","Type":"ContainerStarted","Data":"947cad0aab5477fd30979ad90cc08830578b2eff12acc1814e39ea1a3a77fac4"} Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.676521 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.696077 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" podStartSLOduration=4.822803829 podStartE2EDuration="45.696036571s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.266384429 +0000 UTC m=+983.774555233" lastFinishedPulling="2026-01-28 15:35:34.139617171 +0000 UTC m=+1024.647787975" observedRunningTime="2026-01-28 15:35:35.680292677 +0000 UTC m=+1026.188463491" watchObservedRunningTime="2026-01-28 15:35:35.696036571 +0000 UTC m=+1026.204207375" Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.705510 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" event={"ID":"45be18b4-f249-4c09-8875-9959686d7f8f","Type":"ContainerStarted","Data":"ce8da8f6bc7bc99ea80525ab5ecd55ccd2672da02192e3c07f559ec13b4390de"} Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.705798 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.708935 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" 
event={"ID":"36524b9c-daa2-46d2-a732-b0964bb08873","Type":"ContainerStarted","Data":"26972cd96158dc81f8523c3f1d1d8c68a072edba0524da7cbe6b604eccc61174"} Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.709761 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.710814 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" event={"ID":"9277e421-df3a-49a2-81cc-86d0f7c65809","Type":"ContainerStarted","Data":"6919851838ff21eaaa344e9ba0a4e2476e0a23a9f277058bfbf2eff559b11944"} Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.711391 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.712412 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" event={"ID":"cfeab083-1268-47aa-938e-bd91036755de","Type":"ContainerStarted","Data":"30f239c2e7c47eacbbfd6d4271678d731a84ce3799905d86890140eda7ffbc51"} Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.712746 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.714937 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" event={"ID":"113ba11f-aeba-4710-b5f6-0991e9766d45","Type":"ContainerStarted","Data":"2a341b3a274d22a62e3733c2fe078d1994c17cef4e4d8446367851ebcbfb8447"} Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.715305 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.716923 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" event={"ID":"010cc4f5-4ac8-46e0-be08-80218981003e","Type":"ContainerStarted","Data":"3af250c74fb21a73f8d5a5cabf839d42eb8fb09350569a0d13d833257a899042"}
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.716946 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.910497 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" podStartSLOduration=2.941901776 podStartE2EDuration="45.910471991s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:51.998619839 +0000 UTC m=+982.506790643" lastFinishedPulling="2026-01-28 15:35:34.967190044 +0000 UTC m=+1025.475360858" observedRunningTime="2026-01-28 15:35:35.885601815 +0000 UTC m=+1026.393772609" watchObservedRunningTime="2026-01-28 15:35:35.910471991 +0000 UTC m=+1026.418642795"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.914869 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" podStartSLOduration=4.968962171 podStartE2EDuration="45.914843787s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.321125867 +0000 UTC m=+983.829296671" lastFinishedPulling="2026-01-28 15:35:34.267007483 +0000 UTC m=+1024.775178287" observedRunningTime="2026-01-28 15:35:35.910801571 +0000 UTC m=+1026.418972375" watchObservedRunningTime="2026-01-28 15:35:35.914843787 +0000 UTC m=+1026.423014591"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.944739 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" podStartSLOduration=3.813766146 podStartE2EDuration="45.944715498s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:52.821596219 +0000 UTC m=+983.329767023" lastFinishedPulling="2026-01-28 15:35:34.952545571 +0000 UTC m=+1025.460716375" observedRunningTime="2026-01-28 15:35:35.943907095 +0000 UTC m=+1026.452077899" watchObservedRunningTime="2026-01-28 15:35:35.944715498 +0000 UTC m=+1026.452886302"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.031342 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" podStartSLOduration=5.170970405 podStartE2EDuration="46.031325675s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.299436482 +0000 UTC m=+983.807607286" lastFinishedPulling="2026-01-28 15:35:34.159791742 +0000 UTC m=+1024.667962556" observedRunningTime="2026-01-28 15:35:35.98189 +0000 UTC m=+1026.490060804" watchObservedRunningTime="2026-01-28 15:35:36.031325675 +0000 UTC m=+1026.539496469"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.034263 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" podStartSLOduration=4.568727836 podStartE2EDuration="46.034253879s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:52.801049177 +0000 UTC m=+983.309219981" lastFinishedPulling="2026-01-28 15:35:34.26657522 +0000 UTC m=+1024.774746024" observedRunningTime="2026-01-28 15:35:36.02837953 +0000 UTC m=+1026.536550334" watchObservedRunningTime="2026-01-28 15:35:36.034253879 +0000 UTC m=+1026.542424683"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.101425 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" podStartSLOduration=3.688565207 podStartE2EDuration="46.101398984s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:52.558559468 +0000 UTC m=+983.066730272" lastFinishedPulling="2026-01-28 15:35:34.971393245 +0000 UTC m=+1025.479564049" observedRunningTime="2026-01-28 15:35:36.09674718 +0000 UTC m=+1026.604917994" watchObservedRunningTime="2026-01-28 15:35:36.101398984 +0000 UTC m=+1026.609569788"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.210660 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" podStartSLOduration=9.553388433 podStartE2EDuration="46.210623782s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:52.755231846 +0000 UTC m=+983.263402650" lastFinishedPulling="2026-01-28 15:35:29.412467155 +0000 UTC m=+1019.920637999" observedRunningTime="2026-01-28 15:35:36.206287687 +0000 UTC m=+1026.714458521" watchObservedRunningTime="2026-01-28 15:35:36.210623782 +0000 UTC m=+1026.718794586"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.312138 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" podStartSLOduration=45.312122458 podStartE2EDuration="45.312122458s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:35:36.308059841 +0000 UTC m=+1026.816230635" watchObservedRunningTime="2026-01-28 15:35:36.312122458 +0000 UTC m=+1026.820293262"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.313198 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" podStartSLOduration=4.36823662 podStartE2EDuration="45.313193539s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.408356371 +0000 UTC m=+983.916527175" lastFinishedPulling="2026-01-28 15:35:34.35331329 +0000 UTC m=+1024.861484094" observedRunningTime="2026-01-28 15:35:36.264791984 +0000 UTC m=+1026.772962788" watchObservedRunningTime="2026-01-28 15:35:36.313193539 +0000 UTC m=+1026.821364343"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.337270 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" podStartSLOduration=4.621088065 podStartE2EDuration="46.337244542s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:52.570645326 +0000 UTC m=+983.078816130" lastFinishedPulling="2026-01-28 15:35:34.286801803 +0000 UTC m=+1024.794972607" observedRunningTime="2026-01-28 15:35:36.335714768 +0000 UTC m=+1026.843885592" watchObservedRunningTime="2026-01-28 15:35:36.337244542 +0000 UTC m=+1026.845415346"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.384965 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" podStartSLOduration=5.514798654 podStartE2EDuration="46.384935707s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.397285682 +0000 UTC m=+983.905456486" lastFinishedPulling="2026-01-28 15:35:34.267422735 +0000 UTC m=+1024.775593539" observedRunningTime="2026-01-28 15:35:36.378571133 +0000 UTC m=+1026.886741937" watchObservedRunningTime="2026-01-28 15:35:36.384935707 +0000 UTC m=+1026.893106511"
Jan 28 15:35:37 crc kubenswrapper[4656]: E0128 15:35:37.172609 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" podUID="d903ea5b-f13e-43d5-b65b-44093c70ddee"
Jan 28 15:35:38 crc kubenswrapper[4656]: E0128 15:35:38.179211 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" podUID="e97e04fa-1b66-4373-b31f-12089f1f5b2b"
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.759425 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" event={"ID":"50db0152-72c0-4fc3-9cd5-6b2c01127341","Type":"ContainerStarted","Data":"94e3f215727027354e291f5bf9e4840840f6c34110ac5719885d13b2fc239d38"}
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.762206 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj"
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.764101 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" event={"ID":"3dcf45d4-628c-4071-b732-8ade2d3c4b4e","Type":"ContainerStarted","Data":"810a9250e2ddc43ffba09e5337730db8e84bd558ac00b7b4993fe50130dd0d7a"}
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.764837 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv"
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.766860 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" event={"ID":"7341d49c-e9a9-4108-8a2c-bf808ccb49cf","Type":"ContainerStarted","Data":"e19cc7d0ef107f8fa8c6b91e277cddf9c497e21c3ea8fbf9a42e7239228205eb"}
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.767139 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p"
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.781838 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" podStartSLOduration=4.397470039 podStartE2EDuration="49.781808392s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.214881374 +0000 UTC m=+983.723052178" lastFinishedPulling="2026-01-28 15:35:38.599219727 +0000 UTC m=+1029.107390531" observedRunningTime="2026-01-28 15:35:39.777811847 +0000 UTC m=+1030.285982661" watchObservedRunningTime="2026-01-28 15:35:39.781808392 +0000 UTC m=+1030.289979206"
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.816542 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" podStartSLOduration=43.817412036 podStartE2EDuration="48.816506132s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:35:34.260784443 +0000 UTC m=+1024.768955247" lastFinishedPulling="2026-01-28 15:35:39.259878539 +0000 UTC m=+1029.768049343" observedRunningTime="2026-01-28 15:35:39.809937223 +0000 UTC m=+1030.318108037" watchObservedRunningTime="2026-01-28 15:35:39.816506132 +0000 UTC m=+1030.324676936"
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.848387 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" podStartSLOduration=44.845192887 podStartE2EDuration="49.84836548s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:35:34.261047891 +0000 UTC m=+1024.769218695" lastFinishedPulling="2026-01-28 15:35:39.264220484 +0000 UTC m=+1029.772391288" observedRunningTime="2026-01-28 15:35:39.837983251 +0000 UTC m=+1030.346154075" watchObservedRunningTime="2026-01-28 15:35:39.84836548 +0000 UTC m=+1030.356536284"
Jan 28 15:35:40 crc kubenswrapper[4656]: I0128 15:35:40.672501 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f"
Jan 28 15:35:40 crc kubenswrapper[4656]: I0128 15:35:40.674974 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q"
Jan 28 15:35:40 crc kubenswrapper[4656]: I0128 15:35:40.803883 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq"
Jan 28 15:35:40 crc kubenswrapper[4656]: I0128 15:35:40.898326 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.050641 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.131880 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.158945 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.263865 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.264362 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.264903 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.359843 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.425035 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.529721 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.697861 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2"
Jan 28 15:35:42 crc kubenswrapper[4656]: I0128 15:35:42.127652 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs"
Jan 28 15:35:44 crc kubenswrapper[4656]: E0128 15:35:44.172438 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" podUID="92d1569e-5733-4779-b9fb-7feae2ea9317"
Jan 28 15:35:44 crc kubenswrapper[4656]: E0128 15:35:44.172454 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:c9d639f3d01f7a4f139a8b7fb751ca880893f7b9a4e596d6a5304534e46392ba\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" podUID="5888a906-8758-4179-a30f-c2244ec46072"
Jan 28 15:35:45 crc kubenswrapper[4656]: E0128 15:35:45.172692 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" podUID="0bb42d6d-259a-4532-b3e2-732c0f271d9a"
Jan 28 15:35:46 crc kubenswrapper[4656]: I0128 15:35:46.816909 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p"
Jan 28 15:35:47 crc kubenswrapper[4656]: I0128 15:35:47.702602 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv"
Jan 28 15:35:47 crc kubenswrapper[4656]: I0128 15:35:47.775663 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn"
Jan 28 15:35:48 crc kubenswrapper[4656]: I0128 15:35:48.841914 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" event={"ID":"d903ea5b-f13e-43d5-b65b-44093c70ddee","Type":"ContainerStarted","Data":"d7c9ea7da01181d4c970b8e1f7644761eacbcbc39d44a66a216d26e1920fbdd7"}
Jan 28 15:35:48 crc kubenswrapper[4656]: I0128 15:35:48.843978 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv"
Jan 28 15:35:48 crc kubenswrapper[4656]: I0128 15:35:48.863001 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" podStartSLOduration=2.606713173 podStartE2EDuration="57.862978642s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.434654459 +0000 UTC m=+983.942825273" lastFinishedPulling="2026-01-28 15:35:48.690919938 +0000 UTC m=+1039.199090742" observedRunningTime="2026-01-28 15:35:48.857956909 +0000 UTC m=+1039.366127733" watchObservedRunningTime="2026-01-28 15:35:48.862978642 +0000 UTC m=+1039.371149456"
Jan 28 15:35:51 crc kubenswrapper[4656]: I0128 15:35:51.666493 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj"
Jan 28 15:35:51 crc kubenswrapper[4656]: I0128 15:35:51.869592 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" event={"ID":"e97e04fa-1b66-4373-b31f-12089f1f5b2b","Type":"ContainerStarted","Data":"8db670b605dc99fdaaeead63f9fceb62223aae6d0ef80bd608d29ec722223f3e"}
Jan 28 15:35:51 crc kubenswrapper[4656]: I0128 15:35:51.870345 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg"
Jan 28 15:35:51 crc kubenswrapper[4656]: I0128 15:35:51.871438 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" event={"ID":"f37006c8-da19-4d17-a6d5-f4b075f2220f","Type":"ContainerStarted","Data":"4c87cbfa892f70b190b81ced3ca1efcfbd78b32612776036a134abf7eaff7182"}
Jan 28 15:35:51 crc kubenswrapper[4656]: I0128 15:35:51.871692 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d"
Jan 28 15:35:51 crc kubenswrapper[4656]: I0128 15:35:51.900645 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" podStartSLOduration=3.190736296 podStartE2EDuration="1m0.900625316s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.518969499 +0000 UTC m=+984.027140303" lastFinishedPulling="2026-01-28 15:35:51.228858519 +0000 UTC m=+1041.737029323" observedRunningTime="2026-01-28 15:35:51.895388936 +0000 UTC m=+1042.403559750" watchObservedRunningTime="2026-01-28 15:35:51.900625316 +0000 UTC m=+1042.408796110"
Jan 28 15:35:51 crc kubenswrapper[4656]: I0128 15:35:51.924501 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" podStartSLOduration=4.061959886 podStartE2EDuration="1m1.924481486s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.365134985 +0000 UTC m=+983.873305789" lastFinishedPulling="2026-01-28 15:35:51.227656585 +0000 UTC m=+1041.735827389" observedRunningTime="2026-01-28 15:35:51.921012157 +0000 UTC m=+1042.429182971" watchObservedRunningTime="2026-01-28 15:35:51.924481486 +0000 UTC m=+1042.432652290"
Jan 28 15:35:58 crc kubenswrapper[4656]: I0128 15:35:58.961499 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" event={"ID":"5888a906-8758-4179-a30f-c2244ec46072","Type":"ContainerStarted","Data":"cf12d8018e9f695069c711c5279902bbbe135cbdb396676262bc13eeaaab94f3"}
Jan 28 15:35:58 crc kubenswrapper[4656]: I0128 15:35:58.962461 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8"
Jan 28 15:35:58 crc kubenswrapper[4656]: I0128 15:35:58.979569 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" podStartSLOduration=3.50138402 podStartE2EDuration="1m7.979547479s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.51970106 +0000 UTC m=+984.027871864" lastFinishedPulling="2026-01-28 15:35:57.997864519 +0000 UTC m=+1048.506035323" observedRunningTime="2026-01-28 15:35:58.975478853 +0000 UTC m=+1049.483649667" watchObservedRunningTime="2026-01-28 15:35:58.979547479 +0000 UTC m=+1049.487718273"
Jan 28 15:35:59 crc kubenswrapper[4656]: I0128 15:35:59.970821 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" event={"ID":"0bb42d6d-259a-4532-b3e2-732c0f271d9a","Type":"ContainerStarted","Data":"f32e28c76d4fe3fbdcfa35c3d3dbcf7521926127938cfec3194fb719800529f1"}
Jan 28 15:35:59 crc kubenswrapper[4656]: I0128 15:35:59.973289 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" event={"ID":"92d1569e-5733-4779-b9fb-7feae2ea9317","Type":"ContainerStarted","Data":"8823d6a69ee5db129ec37ce7257dffd633c2931225beaa105a9c8bf3d8e7dd9d"}
Jan 28 15:35:59 crc kubenswrapper[4656]: I0128 15:35:59.973578 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps"
Jan 28 15:35:59 crc kubenswrapper[4656]: I0128 15:35:59.992390 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" podStartSLOduration=3.53110897 podStartE2EDuration="1m8.992360708s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.490966432 +0000 UTC m=+983.999137236" lastFinishedPulling="2026-01-28 15:35:58.95221817 +0000 UTC m=+1049.460388974" observedRunningTime="2026-01-28 15:35:59.986667746 +0000 UTC m=+1050.494838550" watchObservedRunningTime="2026-01-28 15:35:59.992360708 +0000 UTC m=+1050.500531512"
Jan 28 15:36:00 crc kubenswrapper[4656]: I0128 15:36:00.011246 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" podStartSLOduration=2.750269414 podStartE2EDuration="1m9.011222206s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.430665574 +0000 UTC m=+983.938836378" lastFinishedPulling="2026-01-28 15:35:59.691618366 +0000 UTC m=+1050.199789170" observedRunningTime="2026-01-28 15:36:00.010079773 +0000 UTC m=+1050.518250577" watchObservedRunningTime="2026-01-28 15:36:00.011222206 +0000 UTC m=+1050.519393010"
Jan 28 15:36:01 crc kubenswrapper[4656]: I0128 15:36:01.529809 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d"
Jan 28 15:36:02 crc kubenswrapper[4656]: I0128 15:36:02.057898 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg"
Jan 28 15:36:02 crc kubenswrapper[4656]: I0128 15:36:02.090752 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv"
Jan 28 15:36:11 crc kubenswrapper[4656]: I0128 15:36:11.264500 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:36:11 crc kubenswrapper[4656]: I0128 15:36:11.265213 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:36:11 crc kubenswrapper[4656]: I0128 15:36:11.265292 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk"
Jan 28 15:36:11 crc kubenswrapper[4656]: I0128 15:36:11.266079 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"45af716abfac826ba3a4dfbcd1d22436c5270721d55f11ffa5d85cae3cd0840f"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 15:36:11 crc kubenswrapper[4656]: I0128 15:36:11.266216 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://45af716abfac826ba3a4dfbcd1d22436c5270721d55f11ffa5d85cae3cd0840f" gracePeriod=600
Jan 28 15:36:11 crc kubenswrapper[4656]: I0128 15:36:11.814189 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps"
Jan 28 15:36:11 crc kubenswrapper[4656]: I0128 15:36:11.991095 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8"
Jan 28 15:36:12 crc kubenswrapper[4656]: I0128 15:36:12.122463 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="45af716abfac826ba3a4dfbcd1d22436c5270721d55f11ffa5d85cae3cd0840f" exitCode=0
Jan 28 15:36:12 crc kubenswrapper[4656]: I0128 15:36:12.122544 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"45af716abfac826ba3a4dfbcd1d22436c5270721d55f11ffa5d85cae3cd0840f"}
Jan 28 15:36:12 crc kubenswrapper[4656]: I0128 15:36:12.122837 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"e4705c729984fa104745366d57583e3ee80c3a326cc35a32720920b368391441"}
Jan 28 15:36:12 crc kubenswrapper[4656]: I0128 15:36:12.122956 4656 scope.go:117] "RemoveContainer" containerID="87c17d0db94ead712d442056e9a18e38055b40f27c59008c11f1ea77ac6037d0"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.741247 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-59sjg"]
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.742944 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.746899 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.748253 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.748555 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5dbss"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.752999 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.774350 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-59sjg"]
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.801224 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a88808-6249-4879-b857-55182475c4a5-config\") pod \"dnsmasq-dns-675f4bcbfc-59sjg\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.801308 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5v8l\" (UniqueName: \"kubernetes.io/projected/b8a88808-6249-4879-b857-55182475c4a5-kube-api-access-z5v8l\") pod \"dnsmasq-dns-675f4bcbfc-59sjg\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.888134 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7lfkl"]
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.893334 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.896059 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.902908 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a88808-6249-4879-b857-55182475c4a5-config\") pod \"dnsmasq-dns-675f4bcbfc-59sjg\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.903019 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5v8l\" (UniqueName: \"kubernetes.io/projected/b8a88808-6249-4879-b857-55182475c4a5-kube-api-access-z5v8l\") pod \"dnsmasq-dns-675f4bcbfc-59sjg\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.903758 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a88808-6249-4879-b857-55182475c4a5-config\") pod \"dnsmasq-dns-675f4bcbfc-59sjg\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.909742 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7lfkl"]
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.942648 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5v8l\" (UniqueName: \"kubernetes.io/projected/b8a88808-6249-4879-b857-55182475c4a5-kube-api-access-z5v8l\") pod \"dnsmasq-dns-675f4bcbfc-59sjg\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.004756 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.004806 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlsqh\" (UniqueName: \"kubernetes.io/projected/cc574140-1aed-42c6-baab-b39625a3ae3b-kube-api-access-zlsqh\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.005120 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-config\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.066518 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.106917 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.106983 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlsqh\" (UniqueName: \"kubernetes.io/projected/cc574140-1aed-42c6-baab-b39625a3ae3b-kube-api-access-zlsqh\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.107061 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-config\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.108075 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-config\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.108717 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.124674 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlsqh\" (UniqueName: \"kubernetes.io/projected/cc574140-1aed-42c6-baab-b39625a3ae3b-kube-api-access-zlsqh\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.210499 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.828405 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7lfkl"]
Jan 28 15:36:28 crc kubenswrapper[4656]: W0128 15:36:28.897479 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8a88808_6249_4879_b857_55182475c4a5.slice/crio-f20fd4488a144f252fed0b07d5df3525000eb0b27f836032f05267a6bb4be1fc WatchSource:0}: Error finding container f20fd4488a144f252fed0b07d5df3525000eb0b27f836032f05267a6bb4be1fc: Status 404 returned error can't find the container with id f20fd4488a144f252fed0b07d5df3525000eb0b27f836032f05267a6bb4be1fc
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.898154 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-59sjg"]
Jan 28 15:36:29 crc kubenswrapper[4656]: I0128 15:36:29.249628 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg" event={"ID":"b8a88808-6249-4879-b857-55182475c4a5","Type":"ContainerStarted","Data":"f20fd4488a144f252fed0b07d5df3525000eb0b27f836032f05267a6bb4be1fc"}
Jan 28 15:36:29 crc kubenswrapper[4656]: I0128 15:36:29.250997 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl" event={"ID":"cc574140-1aed-42c6-baab-b39625a3ae3b","Type":"ContainerStarted","Data":"b9ce7457339510772a4afabfed4cb6250dc208ca5dccec3d956588e40ccb2fc7"}
Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.077906 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-59sjg"]
Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.113461 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-vnkgc"]
Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.115191 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc"
Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.168885 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-vnkgc"]
Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.257504 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l874\" (UniqueName: \"kubernetes.io/projected/c1211ebf-bf2b-422a-8bf6-8ff685d27325-kube-api-access-6l874\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc"
Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.257627 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc"
Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.257690 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-config\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") "
pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.361134 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.361244 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-config\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.361312 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l874\" (UniqueName: \"kubernetes.io/projected/c1211ebf-bf2b-422a-8bf6-8ff685d27325-kube-api-access-6l874\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.362853 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.363664 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-config\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.396745 4656 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l874\" (UniqueName: \"kubernetes.io/projected/c1211ebf-bf2b-422a-8bf6-8ff685d27325-kube-api-access-6l874\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.511656 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.754706 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7lfkl"] Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.802740 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w2f4q"] Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.813209 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.831703 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w2f4q"] Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.975940 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztrdc\" (UniqueName: \"kubernetes.io/projected/4d610f18-f075-4fdd-9618-c807584a0d12-kube-api-access-ztrdc\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.976248 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-config\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" 
Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.976273 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.077622 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-config\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.077664 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.077746 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztrdc\" (UniqueName: \"kubernetes.io/projected/4d610f18-f075-4fdd-9618-c807584a0d12-kube-api-access-ztrdc\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.078838 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-config\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.079352 4656 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.134964 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztrdc\" (UniqueName: \"kubernetes.io/projected/4d610f18-f075-4fdd-9618-c807584a0d12-kube-api-access-ztrdc\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.148069 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.211748 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-vnkgc"] Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.313045 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.318682 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.328683 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-w59zb" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.336062 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.351547 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.351774 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.351904 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.352059 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.353453 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.408080 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" event={"ID":"c1211ebf-bf2b-422a-8bf6-8ff685d27325","Type":"ContainerStarted","Data":"63a5615d0933628808006a1d7c767190e6e914053a00c94d355b26e0c1287a0b"} Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.414056 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497062 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497119 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497138 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497168 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497194 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/07f26e32-4b43-4591-9ed2-6426a96e596e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497326 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnqw5\" 
(UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-kube-api-access-cnqw5\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497386 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497455 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497476 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497583 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497667 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/07f26e32-4b43-4591-9ed2-6426a96e596e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599649 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599726 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/07f26e32-4b43-4591-9ed2-6426a96e596e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599756 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599784 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599801 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599817 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599833 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/07f26e32-4b43-4591-9ed2-6426a96e596e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599861 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnqw5\" (UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-kube-api-access-cnqw5\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599876 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599899 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599919 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.600304 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.601083 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.601634 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.604323 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.604546 4656 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.624884 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.626526 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.627718 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/07f26e32-4b43-4591-9ed2-6426a96e596e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.628603 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/07f26e32-4b43-4591-9ed2-6426a96e596e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.629277 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.668969 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.684230 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnqw5\" (UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-kube-api-access-cnqw5\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.954069 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.955285 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w2f4q"] Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.023141 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.025657 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.029038 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.038044 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.039143 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-2swg5" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.043706 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.043885 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.043705 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.043944 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.054600 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127694 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127770 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/2239f1cd-f384-40df-9f71-a46caf290038-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127802 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127830 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz557\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-kube-api-access-bz557\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127877 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-config-data\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127910 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127939 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2239f1cd-f384-40df-9f71-a46caf290038-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127976 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127992 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.128016 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.128043 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229215 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") 
" pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229282 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229304 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229338 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2239f1cd-f384-40df-9f71-a46caf290038-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229367 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229404 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz557\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-kube-api-access-bz557\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229476 4656 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-config-data\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229520 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229554 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2239f1cd-f384-40df-9f71-a46caf290038-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229597 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229620 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.230173 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.230353 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.232888 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.233794 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-config-data\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.234745 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.235979 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.236382 4656 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.238141 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2239f1cd-f384-40df-9f71-a46caf290038-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.240542 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2239f1cd-f384-40df-9f71-a46caf290038-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.240963 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.258743 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz557\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-kube-api-access-bz557\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.275561 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " 
pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.393572 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.448778 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" event={"ID":"4d610f18-f075-4fdd-9618-c807584a0d12","Type":"ContainerStarted","Data":"f223adfd127e099f079d9c57cf03da22cee668c8861dac91721c1feb6c571fac"} Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.780218 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:36:32 crc kubenswrapper[4656]: W0128 15:36:32.845895 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07f26e32_4b43_4591_9ed2_6426a96e596e.slice/crio-7dc87b329510cdfa8b14270f027da873db175ebfbaed4b1590d5c1cc63026cba WatchSource:0}: Error finding container 7dc87b329510cdfa8b14270f027da873db175ebfbaed4b1590d5c1cc63026cba: Status 404 returned error can't find the container with id 7dc87b329510cdfa8b14270f027da873db175ebfbaed4b1590d5c1cc63026cba Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.025406 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:36:33 crc kubenswrapper[4656]: W0128 15:36:33.074916 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2239f1cd_f384_40df_9f71_a46caf290038.slice/crio-8e113ef39b9d7a5f8d98d7908e05b72256bce4e8b213a6ad67dc4229bb887959 WatchSource:0}: Error finding container 8e113ef39b9d7a5f8d98d7908e05b72256bce4e8b213a6ad67dc4229bb887959: Status 404 returned error can't find the container with id 8e113ef39b9d7a5f8d98d7908e05b72256bce4e8b213a6ad67dc4229bb887959 Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.202496 4656 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.203875 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.203991 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.215629 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.216097 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-gjcq4" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.216305 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.218678 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.219316 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.354927 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-config-data-default\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.355287 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" 
Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.355319 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e41d89b-8943-4aec-9e33-00db569a2ce8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.355355 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.355388 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e41d89b-8943-4aec-9e33-00db569a2ce8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.355430 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-kolla-config\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.355507 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e41d89b-8943-4aec-9e33-00db569a2ce8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc 
kubenswrapper[4656]: I0128 15:36:33.355552 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbd5f\" (UniqueName: \"kubernetes.io/projected/8e41d89b-8943-4aec-9e33-00db569a2ce8-kube-api-access-qbd5f\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.456928 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-kolla-config\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.457007 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e41d89b-8943-4aec-9e33-00db569a2ce8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.457055 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbd5f\" (UniqueName: \"kubernetes.io/projected/8e41d89b-8943-4aec-9e33-00db569a2ce8-kube-api-access-qbd5f\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.457158 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.457284 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-config-data-default\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.457315 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e41d89b-8943-4aec-9e33-00db569a2ce8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.457379 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.457406 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e41d89b-8943-4aec-9e33-00db569a2ce8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.459354 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-config-data-default\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.459418 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-kolla-config\") pod 
\"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.459544 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.459712 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e41d89b-8943-4aec-9e33-00db569a2ce8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.464952 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e41d89b-8943-4aec-9e33-00db569a2ce8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.466369 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.467722 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e41d89b-8943-4aec-9e33-00db569a2ce8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc 
kubenswrapper[4656]: I0128 15:36:33.480407 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbd5f\" (UniqueName: \"kubernetes.io/projected/8e41d89b-8943-4aec-9e33-00db569a2ce8-kube-api-access-qbd5f\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.510235 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.552459 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.560468 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2239f1cd-f384-40df-9f71-a46caf290038","Type":"ContainerStarted","Data":"8e113ef39b9d7a5f8d98d7908e05b72256bce4e8b213a6ad67dc4229bb887959"} Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.568439 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"07f26e32-4b43-4591-9ed2-6426a96e596e","Type":"ContainerStarted","Data":"7dc87b329510cdfa8b14270f027da873db175ebfbaed4b1590d5c1cc63026cba"} Jan 28 15:36:34 crc kubenswrapper[4656]: I0128 15:36:34.624649 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 15:36:34 crc kubenswrapper[4656]: I0128 15:36:34.901857 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 28 15:36:34 crc kubenswrapper[4656]: I0128 15:36:34.902820 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 28 15:36:34 crc kubenswrapper[4656]: I0128 15:36:34.917618 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 28 15:36:34 crc kubenswrapper[4656]: I0128 15:36:34.917902 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 28 15:36:34 crc kubenswrapper[4656]: I0128 15:36:34.918032 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-dk9jh" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.010919 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-config-data\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.010976 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-kolla-config\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.011024 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmdzt\" (UniqueName: \"kubernetes.io/projected/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-kube-api-access-nmdzt\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.011076 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.011096 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.028614 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.112947 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.112995 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.113035 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-config-data\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.113058 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-kolla-config\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " 
pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.113100 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmdzt\" (UniqueName: \"kubernetes.io/projected/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-kube-api-access-nmdzt\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.114786 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-config-data\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.115603 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-kolla-config\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.146945 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.165828 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmdzt\" (UniqueName: \"kubernetes.io/projected/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-kube-api-access-nmdzt\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.169886 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.250631 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.679759 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e41d89b-8943-4aec-9e33-00db569a2ce8","Type":"ContainerStarted","Data":"3e1b56011b4053bdfce648887f94f25347b29dd451dd102b7d360e36a13a97e6"} Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.783640 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.790591 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.807689 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.820898 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.821122 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.821184 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-8p555" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.821832 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.860277 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/memcached-0"]
Jan 28 15:36:35 crc kubenswrapper[4656]: W0128 15:36:35.925557 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf25455cb_6f99_4958_b7bd_9fa56e45f6e1.slice/crio-3fae74a8425e13bf3a9007e7c925bf149a0245b65c6c1a59f53e092a2b7f8cdc WatchSource:0}: Error finding container 3fae74a8425e13bf3a9007e7c925bf149a0245b65c6c1a59f53e092a2b7f8cdc: Status 404 returned error can't find the container with id 3fae74a8425e13bf3a9007e7c925bf149a0245b65c6c1a59f53e092a2b7f8cdc
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.943587 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hvs4\" (UniqueName: \"kubernetes.io/projected/6a46bc21-63f0-461d-b33d-ec98cb059408-kube-api-access-6hvs4\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.943650 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a46bc21-63f0-461d-b33d-ec98cb059408-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.943727 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.943758 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.943951 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.943981 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6a46bc21-63f0-461d-b33d-ec98cb059408-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.944021 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a46bc21-63f0-461d-b33d-ec98cb059408-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.944056 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.044905 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hvs4\" (UniqueName: \"kubernetes.io/projected/6a46bc21-63f0-461d-b33d-ec98cb059408-kube-api-access-6hvs4\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.044955 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a46bc21-63f0-461d-b33d-ec98cb059408-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.045016 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.045036 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.045086 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.045101 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6a46bc21-63f0-461d-b33d-ec98cb059408-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.045143 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a46bc21-63f0-461d-b33d-ec98cb059408-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.045182 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.045883 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6a46bc21-63f0-461d-b33d-ec98cb059408-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.046224 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.046450 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.047212 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.047333 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.051939 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a46bc21-63f0-461d-b33d-ec98cb059408-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.077069 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hvs4\" (UniqueName: \"kubernetes.io/projected/6a46bc21-63f0-461d-b33d-ec98cb059408-kube-api-access-6hvs4\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.083348 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a46bc21-63f0-461d-b33d-ec98cb059408-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.148745 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.436629 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.731398 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f25455cb-6f99-4958-b7bd-9fa56e45f6e1","Type":"ContainerStarted","Data":"3fae74a8425e13bf3a9007e7c925bf149a0245b65c6c1a59f53e092a2b7f8cdc"}
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.275743 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.413750 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.416519 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.420614 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vn9b9"
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.513013 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.514932 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp9sh\" (UniqueName: \"kubernetes.io/projected/7eff8e04-7afc-4a92-998f-db692ece65e7-kube-api-access-cp9sh\") pod \"kube-state-metrics-0\" (UID: \"7eff8e04-7afc-4a92-998f-db692ece65e7\") " pod="openstack/kube-state-metrics-0"
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.618679 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp9sh\" (UniqueName: \"kubernetes.io/projected/7eff8e04-7afc-4a92-998f-db692ece65e7-kube-api-access-cp9sh\") pod \"kube-state-metrics-0\" (UID: \"7eff8e04-7afc-4a92-998f-db692ece65e7\") " pod="openstack/kube-state-metrics-0"
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.671589 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp9sh\" (UniqueName: \"kubernetes.io/projected/7eff8e04-7afc-4a92-998f-db692ece65e7-kube-api-access-cp9sh\") pod \"kube-state-metrics-0\" (UID: \"7eff8e04-7afc-4a92-998f-db692ece65e7\") " pod="openstack/kube-state-metrics-0"
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.758612 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.825487 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6a46bc21-63f0-461d-b33d-ec98cb059408","Type":"ContainerStarted","Data":"fbd72bea68a7781922434bc7f920fbe71b31713688c7e0bd6a5494f1c674275f"}
Jan 28 15:36:38 crc kubenswrapper[4656]: I0128 15:36:38.776867 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 15:36:38 crc kubenswrapper[4656]: W0128 15:36:38.823542 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7eff8e04_7afc_4a92_998f_db692ece65e7.slice/crio-7d1c788b0bdecd35d3e97f7e5df617fd25b751c56801dea0ee91f2799c8baf48 WatchSource:0}: Error finding container 7d1c788b0bdecd35d3e97f7e5df617fd25b751c56801dea0ee91f2799c8baf48: Status 404 returned error can't find the container with id 7d1c788b0bdecd35d3e97f7e5df617fd25b751c56801dea0ee91f2799c8baf48
Jan 28 15:36:39 crc kubenswrapper[4656]: I0128 15:36:39.871334 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7eff8e04-7afc-4a92-998f-db692ece65e7","Type":"ContainerStarted","Data":"7d1c788b0bdecd35d3e97f7e5df617fd25b751c56801dea0ee91f2799c8baf48"}
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.319876 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.324203 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.331634 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-lmk2h"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.333125 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.333356 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.333567 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.343345 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.343563 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7pppv"]
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.344627 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.353294 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.353785 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-c6cm2"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.354026 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.366518 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.377562 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-28hwk"]
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.386678 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.394863 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7pppv"]
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.403656 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-28hwk"]
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424223 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/681fa692-9a54-4d03-a31c-952409143c4f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424265 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424286 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681fa692-9a54-4d03-a31c-952409143c4f-config\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424312 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424348 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgjpl\" (UniqueName: \"kubernetes.io/projected/681fa692-9a54-4d03-a31c-952409143c4f-kube-api-access-vgjpl\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424420 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424520 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/681fa692-9a54-4d03-a31c-952409143c4f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424548 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.525843 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-run\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.525902 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px5pg\" (UniqueName: \"kubernetes.io/projected/beab0392-2167-4283-97ae-12498c5d02c1-kube-api-access-px5pg\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.525942 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gt2w\" (UniqueName: \"kubernetes.io/projected/4815f130-4106-456b-9bcb-b34536d9ddc9-kube-api-access-4gt2w\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.525974 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-run-ovn\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.525990 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4815f130-4106-456b-9bcb-b34536d9ddc9-ovn-controller-tls-certs\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526026 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526063 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/beab0392-2167-4283-97ae-12498c5d02c1-scripts\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526123 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/681fa692-9a54-4d03-a31c-952409143c4f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526147 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4815f130-4106-456b-9bcb-b34536d9ddc9-scripts\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526184 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526208 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-etc-ovs\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526229 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/681fa692-9a54-4d03-a31c-952409143c4f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526249 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526272 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681fa692-9a54-4d03-a31c-952409143c4f-config\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526293 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-lib\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526311 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526331 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-log\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526367 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgjpl\" (UniqueName: \"kubernetes.io/projected/681fa692-9a54-4d03-a31c-952409143c4f-kube-api-access-vgjpl\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526388 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-log-ovn\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526403 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4815f130-4106-456b-9bcb-b34536d9ddc9-combined-ca-bundle\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526423 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-run\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.530897 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.532711 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/681fa692-9a54-4d03-a31c-952409143c4f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.533310 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/681fa692-9a54-4d03-a31c-952409143c4f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.536762 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681fa692-9a54-4d03-a31c-952409143c4f-config\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.569514 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.569614 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.570652 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.577007 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.589759 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgjpl\" (UniqueName: \"kubernetes.io/projected/681fa692-9a54-4d03-a31c-952409143c4f-kube-api-access-vgjpl\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633231 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-lib\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633275 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-log\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633310 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-log-ovn\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633331 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4815f130-4106-456b-9bcb-b34536d9ddc9-combined-ca-bundle\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633354 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-run\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633374 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-run\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633394 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px5pg\" (UniqueName: \"kubernetes.io/projected/beab0392-2167-4283-97ae-12498c5d02c1-kube-api-access-px5pg\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633434 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gt2w\" (UniqueName: \"kubernetes.io/projected/4815f130-4106-456b-9bcb-b34536d9ddc9-kube-api-access-4gt2w\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633462 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-run-ovn\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633478 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4815f130-4106-456b-9bcb-b34536d9ddc9-ovn-controller-tls-certs\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633506 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/beab0392-2167-4283-97ae-12498c5d02c1-scripts\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633549 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4815f130-4106-456b-9bcb-b34536d9ddc9-scripts\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633565 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-etc-ovs\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.634117 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-etc-ovs\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.634314 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-lib\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.634436 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-log\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.634508 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-log-ovn\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.635110 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-run\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.635173 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-run\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.642660 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4815f130-4106-456b-9bcb-b34536d9ddc9-ovn-controller-tls-certs\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.642817 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-run-ovn\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.654642 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4815f130-4106-456b-9bcb-b34536d9ddc9-scripts\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.663964 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4815f130-4106-456b-9bcb-b34536d9ddc9-combined-ca-bundle\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.664346 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/beab0392-2167-4283-97ae-12498c5d02c1-scripts\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.675099 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gt2w\" (UniqueName: \"kubernetes.io/projected/4815f130-4106-456b-9bcb-b34536d9ddc9-kube-api-access-4gt2w\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " 
pod="openstack/ovn-controller-7pppv" Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.675352 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px5pg\" (UniqueName: \"kubernetes.io/projected/beab0392-2167-4283-97ae-12498c5d02c1-kube-api-access-px5pg\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk" Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.715751 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.725324 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7pppv" Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.749378 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-28hwk" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.300459 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.302100 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.304416 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-wtkr6" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.305393 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.306204 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.310565 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.325957 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.398333 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.398736 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.398810 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99d74\" (UniqueName: \"kubernetes.io/projected/da949f76-8013-4824-bda9-0656b43920b5-kube-api-access-99d74\") 
pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.398861 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.398885 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/da949f76-8013-4824-bda9-0656b43920b5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.398975 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.399081 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da949f76-8013-4824-bda9-0656b43920b5-config\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.399138 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da949f76-8013-4824-bda9-0656b43920b5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc 
kubenswrapper[4656]: I0128 15:36:44.501281 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99d74\" (UniqueName: \"kubernetes.io/projected/da949f76-8013-4824-bda9-0656b43920b5-kube-api-access-99d74\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.501348 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.501378 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/da949f76-8013-4824-bda9-0656b43920b5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.501493 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.501633 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da949f76-8013-4824-bda9-0656b43920b5-config\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.501696 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/da949f76-8013-4824-bda9-0656b43920b5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.501775 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.501808 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.504153 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da949f76-8013-4824-bda9-0656b43920b5-config\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.504468 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.505236 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da949f76-8013-4824-bda9-0656b43920b5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " 
pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.506095 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/da949f76-8013-4824-bda9-0656b43920b5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.508546 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.510840 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.517907 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.529054 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99d74\" (UniqueName: \"kubernetes.io/projected/da949f76-8013-4824-bda9-0656b43920b5-kube-api-access-99d74\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.532319 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.632591 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:02 crc kubenswrapper[4656]: E0128 15:37:02.890836 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 28 15:37:02 crc kubenswrapper[4656]: E0128 15:37:02.891779 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,Recursive
ReadOnly:nil,},VolumeMount{Name:kube-api-access-6hvs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(6a46bc21-63f0-461d-b33d-ec98cb059408): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:02 crc kubenswrapper[4656]: E0128 15:37:02.893072 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="6a46bc21-63f0-461d-b33d-ec98cb059408" Jan 28 15:37:02 crc kubenswrapper[4656]: E0128 15:37:02.945111 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 28 15:37:02 crc kubenswrapper[4656]: E0128 15:37:02.945522 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbd5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(8e41d89b-8943-4aec-9e33-00db569a2ce8): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:02 crc kubenswrapper[4656]: E0128 15:37:02.946816 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="8e41d89b-8943-4aec-9e33-00db569a2ce8" Jan 28 15:37:03 crc kubenswrapper[4656]: E0128 15:37:03.275064 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="8e41d89b-8943-4aec-9e33-00db569a2ce8" Jan 28 15:37:03 crc kubenswrapper[4656]: E0128 15:37:03.275630 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="6a46bc21-63f0-461d-b33d-ec98cb059408" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.173729 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.173981 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > 
/var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cnqw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil
,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(07f26e32-4b43-4591-9ed2-6426a96e596e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.175428 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="07f26e32-4b43-4591-9ed2-6426a96e596e" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.282688 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="07f26e32-4b43-4591-9ed2-6426a96e596e" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.943736 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.944024 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5ddh5f9h655h57bh5d6h96h5bch59fh686h5c6h68ch56bh5f7h9bhfh5dch665hdchf5h584h696h74hc5h569h5fbh5bh54h94h64ch98h65hf9q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmdzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(f25455cb-6f99-4958-b7bd-9fa56e45f6e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.945363 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="f25455cb-6f99-4958-b7bd-9fa56e45f6e1" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.974103 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.974426 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bz557,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPro
be:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(2239f1cd-f384-40df-9f71-a46caf290038): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.976156 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="2239f1cd-f384-40df-9f71-a46caf290038" Jan 28 15:37:05 crc kubenswrapper[4656]: E0128 15:37:05.290565 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="2239f1cd-f384-40df-9f71-a46caf290038" Jan 28 15:37:05 crc kubenswrapper[4656]: E0128 15:37:05.294295 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="f25455cb-6f99-4958-b7bd-9fa56e45f6e1" Jan 28 15:37:10 crc kubenswrapper[4656]: I0128 15:37:10.857254 4656 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ovn-controller-ovs-28hwk"] Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.869098 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.869655 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5v8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnl
yRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-59sjg_openstack(b8a88808-6249-4879-b857-55182475c4a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.870812 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg" podUID="b8a88808-6249-4879-b857-55182475c4a5" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.881413 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.881598 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlsqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-7lfkl_openstack(cc574140-1aed-42c6-baab-b39625a3ae3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.882862 4656 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl" podUID="cc574140-1aed-42c6-baab-b39625a3ae3b" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.885026 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.885225 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6l874,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-vnkgc_openstack(c1211ebf-bf2b-422a-8bf6-8ff685d27325): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.886677 4656 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" podUID="c1211ebf-bf2b-422a-8bf6-8ff685d27325" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.953842 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.953989 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztrdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-w2f4q_openstack(4d610f18-f075-4fdd-9618-c807584a0d12): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.955521 4656 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" podUID="4d610f18-f075-4fdd-9618-c807584a0d12" Jan 28 15:37:11 crc kubenswrapper[4656]: I0128 15:37:11.368625 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7pppv"] Jan 28 15:37:11 crc kubenswrapper[4656]: I0128 15:37:11.372725 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-28hwk" event={"ID":"beab0392-2167-4283-97ae-12498c5d02c1","Type":"ContainerStarted","Data":"43d4c3c043f701685e050b0ac171acad9522a575c3f66b091b921b3560befa27"} Jan 28 15:37:11 crc kubenswrapper[4656]: E0128 15:37:11.374823 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" podUID="4d610f18-f075-4fdd-9618-c807584a0d12" Jan 28 15:37:11 crc kubenswrapper[4656]: E0128 15:37:11.375747 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" podUID="c1211ebf-bf2b-422a-8bf6-8ff685d27325" Jan 28 15:37:11 crc kubenswrapper[4656]: I0128 15:37:11.654340 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 15:37:12 crc kubenswrapper[4656]: E0128 15:37:12.192713 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 28 15:37:12 crc kubenswrapper[4656]: E0128 
15:37:12.192766 4656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 28 15:37:12 crc kubenswrapper[4656]: E0128 15:37:12.192922 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cp9sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(7eff8e04-7afc-4a92-998f-db692ece65e7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:37:12 crc kubenswrapper[4656]: E0128 15:37:12.194552 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="7eff8e04-7afc-4a92-998f-db692ece65e7" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.242694 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.258080 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.352210 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 15:37:12 crc kubenswrapper[4656]: W0128 15:37:12.359120 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda949f76_8013_4824_bda9_0656b43920b5.slice/crio-31bbcf00d650f0e412864859f03bd0ede953c8294dbffa34782563edd52a7fa3 WatchSource:0}: Error finding container 31bbcf00d650f0e412864859f03bd0ede953c8294dbffa34782563edd52a7fa3: Status 404 returned error can't find the container with id 31bbcf00d650f0e412864859f03bd0ede953c8294dbffa34782563edd52a7fa3 Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.381315 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"da949f76-8013-4824-bda9-0656b43920b5","Type":"ContainerStarted","Data":"31bbcf00d650f0e412864859f03bd0ede953c8294dbffa34782563edd52a7fa3"} Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.382845 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"681fa692-9a54-4d03-a31c-952409143c4f","Type":"ContainerStarted","Data":"28fc1ab98d1cef9f48510102443528037c1500c7d9bd171fc47c3a1bd2c07074"} Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.386472 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg" event={"ID":"b8a88808-6249-4879-b857-55182475c4a5","Type":"ContainerDied","Data":"f20fd4488a144f252fed0b07d5df3525000eb0b27f836032f05267a6bb4be1fc"} Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.386668 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.392104 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl" event={"ID":"cc574140-1aed-42c6-baab-b39625a3ae3b","Type":"ContainerDied","Data":"b9ce7457339510772a4afabfed4cb6250dc208ca5dccec3d956588e40ccb2fc7"} Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.392147 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.395503 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7pppv" event={"ID":"4815f130-4106-456b-9bcb-b34536d9ddc9","Type":"ContainerStarted","Data":"ee88d12a32cad34a07005af6c62f7bd52ba5f4beb93a283feec77fb9bfebd34b"} Jan 28 15:37:12 crc kubenswrapper[4656]: E0128 15:37:12.397317 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="7eff8e04-7afc-4a92-998f-db692ece65e7" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.406323 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a88808-6249-4879-b857-55182475c4a5-config\") pod \"b8a88808-6249-4879-b857-55182475c4a5\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.406393 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-dns-svc\") pod \"cc574140-1aed-42c6-baab-b39625a3ae3b\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 
15:37:12.406441 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-config\") pod \"cc574140-1aed-42c6-baab-b39625a3ae3b\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.406469 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlsqh\" (UniqueName: \"kubernetes.io/projected/cc574140-1aed-42c6-baab-b39625a3ae3b-kube-api-access-zlsqh\") pod \"cc574140-1aed-42c6-baab-b39625a3ae3b\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.406533 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5v8l\" (UniqueName: \"kubernetes.io/projected/b8a88808-6249-4879-b857-55182475c4a5-kube-api-access-z5v8l\") pod \"b8a88808-6249-4879-b857-55182475c4a5\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.407184 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8a88808-6249-4879-b857-55182475c4a5-config" (OuterVolumeSpecName: "config") pod "b8a88808-6249-4879-b857-55182475c4a5" (UID: "b8a88808-6249-4879-b857-55182475c4a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.407229 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc574140-1aed-42c6-baab-b39625a3ae3b" (UID: "cc574140-1aed-42c6-baab-b39625a3ae3b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.407623 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a88808-6249-4879-b857-55182475c4a5-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.407637 4656 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.410364 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-config" (OuterVolumeSpecName: "config") pod "cc574140-1aed-42c6-baab-b39625a3ae3b" (UID: "cc574140-1aed-42c6-baab-b39625a3ae3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.419319 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc574140-1aed-42c6-baab-b39625a3ae3b-kube-api-access-zlsqh" (OuterVolumeSpecName: "kube-api-access-zlsqh") pod "cc574140-1aed-42c6-baab-b39625a3ae3b" (UID: "cc574140-1aed-42c6-baab-b39625a3ae3b"). InnerVolumeSpecName "kube-api-access-zlsqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.419698 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a88808-6249-4879-b857-55182475c4a5-kube-api-access-z5v8l" (OuterVolumeSpecName: "kube-api-access-z5v8l") pod "b8a88808-6249-4879-b857-55182475c4a5" (UID: "b8a88808-6249-4879-b857-55182475c4a5"). InnerVolumeSpecName "kube-api-access-z5v8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.509457 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.509755 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlsqh\" (UniqueName: \"kubernetes.io/projected/cc574140-1aed-42c6-baab-b39625a3ae3b-kube-api-access-zlsqh\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.509770 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5v8l\" (UniqueName: \"kubernetes.io/projected/b8a88808-6249-4879-b857-55182475c4a5-kube-api-access-z5v8l\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.765845 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-59sjg"] Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.773738 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-59sjg"] Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.815338 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7lfkl"] Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.826689 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7lfkl"] Jan 28 15:37:13 crc kubenswrapper[4656]: I0128 15:37:13.181739 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a88808-6249-4879-b857-55182475c4a5" path="/var/lib/kubelet/pods/b8a88808-6249-4879-b857-55182475c4a5/volumes" Jan 28 15:37:13 crc kubenswrapper[4656]: I0128 15:37:13.182109 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc574140-1aed-42c6-baab-b39625a3ae3b" 
path="/var/lib/kubelet/pods/cc574140-1aed-42c6-baab-b39625a3ae3b/volumes" Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.450371 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-28hwk" event={"ID":"beab0392-2167-4283-97ae-12498c5d02c1","Type":"ContainerStarted","Data":"bc639ec2d6bf1314e43b21b17cf8d76449f8b8bb14d7e24a9cea2ec054f03c67"} Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.452909 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6a46bc21-63f0-461d-b33d-ec98cb059408","Type":"ContainerStarted","Data":"4c09f6240f474151eb5040fa5896be8d322dfa0e215cd538c28ace35d81aa5ea"} Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.455436 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7pppv" event={"ID":"4815f130-4106-456b-9bcb-b34536d9ddc9","Type":"ContainerStarted","Data":"262f743ac64b4d0404f708a0ba8cb5862c95bc3d9ab3b49e82db849f6219484e"} Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.456060 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-7pppv" Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.458885 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"da949f76-8013-4824-bda9-0656b43920b5","Type":"ContainerStarted","Data":"0d6f9e6dedc080f1d2651b0859f2b139c4f4ae364eb80a6946a76e6cd8ec806b"} Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.460707 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f25455cb-6f99-4958-b7bd-9fa56e45f6e1","Type":"ContainerStarted","Data":"da52010fd6af69f869c9d92e361a078930f6fd989245b979cc18134154c605b6"} Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.461559 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.462963 
4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e41d89b-8943-4aec-9e33-00db569a2ce8","Type":"ContainerStarted","Data":"9b9fc19a1f3e858e5fc427e01bbeef6a977238f5023354a967caf3c887d5de0e"} Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.466560 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"681fa692-9a54-4d03-a31c-952409143c4f","Type":"ContainerStarted","Data":"b4a8cd4cc689d188c63069773647e15ab4be4770072365bcb118ab34ccd9d894"} Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.556531 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7pppv" podStartSLOduration=32.29407409 podStartE2EDuration="39.556509854s" podCreationTimestamp="2026-01-28 15:36:41 +0000 UTC" firstStartedPulling="2026-01-28 15:37:12.179973714 +0000 UTC m=+1122.688144518" lastFinishedPulling="2026-01-28 15:37:19.442409468 +0000 UTC m=+1129.950580282" observedRunningTime="2026-01-28 15:37:20.555592878 +0000 UTC m=+1131.063763682" watchObservedRunningTime="2026-01-28 15:37:20.556509854 +0000 UTC m=+1131.064680658" Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.576773 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.073660924 podStartE2EDuration="46.576746331s" podCreationTimestamp="2026-01-28 15:36:34 +0000 UTC" firstStartedPulling="2026-01-28 15:36:35.937430477 +0000 UTC m=+1086.445601281" lastFinishedPulling="2026-01-28 15:37:19.440515884 +0000 UTC m=+1129.948686688" observedRunningTime="2026-01-28 15:37:20.575152715 +0000 UTC m=+1131.083323519" watchObservedRunningTime="2026-01-28 15:37:20.576746331 +0000 UTC m=+1131.084917155" Jan 28 15:37:21 crc kubenswrapper[4656]: I0128 15:37:21.484027 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"07f26e32-4b43-4591-9ed2-6426a96e596e","Type":"ContainerStarted","Data":"74ff180262c50c4a408406c295f8f1bca87a9e6fc375807df80958cce55bb379"} Jan 28 15:37:21 crc kubenswrapper[4656]: I0128 15:37:21.488717 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2239f1cd-f384-40df-9f71-a46caf290038","Type":"ContainerStarted","Data":"9d149d3029d11945b504fb085462ea962ef4c3fcb25963157c8baca85a61ef3e"} Jan 28 15:37:21 crc kubenswrapper[4656]: I0128 15:37:21.491698 4656 generic.go:334] "Generic (PLEG): container finished" podID="beab0392-2167-4283-97ae-12498c5d02c1" containerID="bc639ec2d6bf1314e43b21b17cf8d76449f8b8bb14d7e24a9cea2ec054f03c67" exitCode=0 Jan 28 15:37:21 crc kubenswrapper[4656]: I0128 15:37:21.491752 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-28hwk" event={"ID":"beab0392-2167-4283-97ae-12498c5d02c1","Type":"ContainerDied","Data":"bc639ec2d6bf1314e43b21b17cf8d76449f8b8bb14d7e24a9cea2ec054f03c67"} Jan 28 15:37:22 crc kubenswrapper[4656]: I0128 15:37:22.508264 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-28hwk" event={"ID":"beab0392-2167-4283-97ae-12498c5d02c1","Type":"ContainerStarted","Data":"6b67bfaa7406a14b181d3c9e2aff5e03553a3ebd87b0ce069924851826b27b2a"} Jan 28 15:37:22 crc kubenswrapper[4656]: I0128 15:37:22.508736 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-28hwk" event={"ID":"beab0392-2167-4283-97ae-12498c5d02c1","Type":"ContainerStarted","Data":"f47e73927eb4df8d70ed8020d0fc10b1e6b7c8af854ef88e73c64b5f45dcef87"} Jan 28 15:37:22 crc kubenswrapper[4656]: I0128 15:37:22.509606 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-28hwk" Jan 28 15:37:22 crc kubenswrapper[4656]: I0128 15:37:22.509633 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-28hwk" Jan 28 
15:37:22 crc kubenswrapper[4656]: I0128 15:37:22.535054 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-28hwk" podStartSLOduration=33.033598718 podStartE2EDuration="41.535034618s" podCreationTimestamp="2026-01-28 15:36:41 +0000 UTC" firstStartedPulling="2026-01-28 15:37:10.941003109 +0000 UTC m=+1121.449173913" lastFinishedPulling="2026-01-28 15:37:19.442439009 +0000 UTC m=+1129.950609813" observedRunningTime="2026-01-28 15:37:22.532810125 +0000 UTC m=+1133.040980939" watchObservedRunningTime="2026-01-28 15:37:22.535034618 +0000 UTC m=+1133.043205422" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.235058 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vs8p5"] Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.248640 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.252346 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vs8p5"] Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.253366 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.256637 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.346962 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-config\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.347464 4656 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.347659 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-ovn-rundir\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.347901 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-ovs-rundir\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.354108 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-combined-ca-bundle\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.354343 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc7jl\" (UniqueName: \"kubernetes.io/projected/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-kube-api-access-tc7jl\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc 
kubenswrapper[4656]: I0128 15:37:25.453118 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w2f4q"] Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.457103 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-combined-ca-bundle\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.457237 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc7jl\" (UniqueName: \"kubernetes.io/projected/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-kube-api-access-tc7jl\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.457312 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-config\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.457358 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.457404 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-ovn-rundir\") pod 
\"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.457464 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-ovs-rundir\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.457838 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-ovs-rundir\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.458205 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-ovn-rundir\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.459000 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-config\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.469011 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " 
pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.469569 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-combined-ca-bundle\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.497050 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc7jl\" (UniqueName: \"kubernetes.io/projected/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-kube-api-access-tc7jl\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.523668 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9k966"] Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.538613 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.571077 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-config\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.571342 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.571620 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.571721 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5rh4\" (UniqueName: \"kubernetes.io/projected/ad05b286-d454-4ab3-b003-cdff0f888c5e-kube-api-access-l5rh4\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.579064 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.595899 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.651024 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9k966"] Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.695198 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.695281 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5rh4\" (UniqueName: \"kubernetes.io/projected/ad05b286-d454-4ab3-b003-cdff0f888c5e-kube-api-access-l5rh4\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.695341 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-config\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.695374 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.696457 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.696779 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.697080 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-config\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.780329 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5rh4\" (UniqueName: \"kubernetes.io/projected/ad05b286-d454-4ab3-b003-cdff0f888c5e-kube-api-access-l5rh4\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.886616 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.976499 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-vnkgc"] Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.044673 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tmnbd"] Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.046402 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.050516 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.080935 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tmnbd"] Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.108390 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66n6d\" (UniqueName: \"kubernetes.io/projected/cadd7215-631f-469f-9d02-243efd40508a-kube-api-access-66n6d\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.108464 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.108504 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.108537 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-config\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.108584 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.209542 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66n6d\" (UniqueName: \"kubernetes.io/projected/cadd7215-631f-469f-9d02-243efd40508a-kube-api-access-66n6d\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.209620 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.209654 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.209691 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-config\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 
15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.209744 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.210742 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.211152 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.211428 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.212057 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-config\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.240989 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-66n6d\" (UniqueName: \"kubernetes.io/projected/cadd7215-631f-469f-9d02-243efd40508a-kube-api-access-66n6d\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.374232 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.712090 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.821844 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztrdc\" (UniqueName: \"kubernetes.io/projected/4d610f18-f075-4fdd-9618-c807584a0d12-kube-api-access-ztrdc\") pod \"4d610f18-f075-4fdd-9618-c807584a0d12\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.822080 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-config\") pod \"4d610f18-f075-4fdd-9618-c807584a0d12\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.822185 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-dns-svc\") pod \"4d610f18-f075-4fdd-9618-c807584a0d12\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.822777 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-config" (OuterVolumeSpecName: "config") pod "4d610f18-f075-4fdd-9618-c807584a0d12" (UID: 
"4d610f18-f075-4fdd-9618-c807584a0d12"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.823822 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d610f18-f075-4fdd-9618-c807584a0d12" (UID: "4d610f18-f075-4fdd-9618-c807584a0d12"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.828825 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d610f18-f075-4fdd-9618-c807584a0d12-kube-api-access-ztrdc" (OuterVolumeSpecName: "kube-api-access-ztrdc") pod "4d610f18-f075-4fdd-9618-c807584a0d12" (UID: "4d610f18-f075-4fdd-9618-c807584a0d12"). InnerVolumeSpecName "kube-api-access-ztrdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.924147 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.924487 4656 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.924499 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztrdc\" (UniqueName: \"kubernetes.io/projected/4d610f18-f075-4fdd-9618-c807584a0d12-kube-api-access-ztrdc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.322219 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vs8p5"] Jan 28 15:37:27 crc 
kubenswrapper[4656]: I0128 15:37:27.495012 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tmnbd"] Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.506137 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9k966"] Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.546041 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vs8p5" event={"ID":"f39f654e-78ca-44c2-8c6a-a1de43a83d3f","Type":"ContainerStarted","Data":"0459d82bf81cb9d98b851ce0dc33b37b4608f574bca7ce64efe31e15395bd08e"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.554303 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"da949f76-8013-4824-bda9-0656b43920b5","Type":"ContainerStarted","Data":"ed16f6956dab3a0e253c0c91804af4966b98ad23b7e35d3059968bdae4d7a2ea"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.559617 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" event={"ID":"4d610f18-f075-4fdd-9618-c807584a0d12","Type":"ContainerDied","Data":"f223adfd127e099f079d9c57cf03da22cee668c8861dac91721c1feb6c571fac"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.559713 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.565389 4656 generic.go:334] "Generic (PLEG): container finished" podID="c1211ebf-bf2b-422a-8bf6-8ff685d27325" containerID="d7b43fdb99a2acae6cf8b5d6e5948315f1bd352f9c1ad850d550c3eb79c16b71" exitCode=0 Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.565464 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" event={"ID":"c1211ebf-bf2b-422a-8bf6-8ff685d27325","Type":"ContainerDied","Data":"d7b43fdb99a2acae6cf8b5d6e5948315f1bd352f9c1ad850d550c3eb79c16b71"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.568000 4656 generic.go:334] "Generic (PLEG): container finished" podID="8e41d89b-8943-4aec-9e33-00db569a2ce8" containerID="9b9fc19a1f3e858e5fc427e01bbeef6a977238f5023354a967caf3c887d5de0e" exitCode=0 Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.568074 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e41d89b-8943-4aec-9e33-00db569a2ce8","Type":"ContainerDied","Data":"9b9fc19a1f3e858e5fc427e01bbeef6a977238f5023354a967caf3c887d5de0e"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.576089 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"681fa692-9a54-4d03-a31c-952409143c4f","Type":"ContainerStarted","Data":"d2f3fa08a429d53e058b1424fcbba9563239b7eaf5932e71b6c86fc0117c4f6c"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.581593 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7eff8e04-7afc-4a92-998f-db692ece65e7","Type":"ContainerStarted","Data":"24a3867adf9a884e876f5084f1beea5148b255d639f393cdbb1fb2dd6f7422fe"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.581988 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 15:37:27 crc 
kubenswrapper[4656]: I0128 15:37:27.590313 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=30.104906027 podStartE2EDuration="44.59028653s" podCreationTimestamp="2026-01-28 15:36:43 +0000 UTC" firstStartedPulling="2026-01-28 15:37:12.363436603 +0000 UTC m=+1122.871607407" lastFinishedPulling="2026-01-28 15:37:26.848817106 +0000 UTC m=+1137.356987910" observedRunningTime="2026-01-28 15:37:27.586445791 +0000 UTC m=+1138.094616595" watchObservedRunningTime="2026-01-28 15:37:27.59028653 +0000 UTC m=+1138.098457334" Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.605889 4656 generic.go:334] "Generic (PLEG): container finished" podID="6a46bc21-63f0-461d-b33d-ec98cb059408" containerID="4c09f6240f474151eb5040fa5896be8d322dfa0e215cd538c28ace35d81aa5ea" exitCode=0 Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.605941 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6a46bc21-63f0-461d-b33d-ec98cb059408","Type":"ContainerDied","Data":"4c09f6240f474151eb5040fa5896be8d322dfa0e215cd538c28ace35d81aa5ea"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.689459 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.538677125 podStartE2EDuration="50.689427116s" podCreationTimestamp="2026-01-28 15:36:37 +0000 UTC" firstStartedPulling="2026-01-28 15:36:38.864804147 +0000 UTC m=+1089.372974951" lastFinishedPulling="2026-01-28 15:37:27.015554138 +0000 UTC m=+1137.523724942" observedRunningTime="2026-01-28 15:37:27.681975424 +0000 UTC m=+1138.190146228" watchObservedRunningTime="2026-01-28 15:37:27.689427116 +0000 UTC m=+1138.197597940" Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.944877 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=33.296595262 
podStartE2EDuration="47.944837606s" podCreationTimestamp="2026-01-28 15:36:40 +0000 UTC" firstStartedPulling="2026-01-28 15:37:12.198951955 +0000 UTC m=+1122.707122759" lastFinishedPulling="2026-01-28 15:37:26.847194299 +0000 UTC m=+1137.355365103" observedRunningTime="2026-01-28 15:37:27.93444949 +0000 UTC m=+1138.442620314" watchObservedRunningTime="2026-01-28 15:37:27.944837606 +0000 UTC m=+1138.453008420" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.030768 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w2f4q"] Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.062327 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9k966"] Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.087468 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w2f4q"] Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.125707 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-jpj69"] Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.128096 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.199242 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jpj69"] Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.203680 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d578\" (UniqueName: \"kubernetes.io/projected/c152e3b8-7b70-4580-988e-4cf053f87aa2-kube-api-access-8d578\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.203737 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-dns-svc\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.203825 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.203863 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.203886 4656 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-config\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.305654 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.305716 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.305740 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-config\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.305782 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d578\" (UniqueName: \"kubernetes.io/projected/c152e3b8-7b70-4580-988e-4cf053f87aa2-kube-api-access-8d578\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.305809 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-dns-svc\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.309688 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-dns-svc\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.310145 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.310438 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.310731 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-config\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.355264 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d578\" (UniqueName: \"kubernetes.io/projected/c152e3b8-7b70-4580-988e-4cf053f87aa2-kube-api-access-8d578\") pod 
\"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.433118 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.465979 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.507968 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-dns-svc\") pod \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.508938 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l874\" (UniqueName: \"kubernetes.io/projected/c1211ebf-bf2b-422a-8bf6-8ff685d27325-kube-api-access-6l874\") pod \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.509141 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-config\") pod \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.515588 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1211ebf-bf2b-422a-8bf6-8ff685d27325-kube-api-access-6l874" (OuterVolumeSpecName: "kube-api-access-6l874") pod "c1211ebf-bf2b-422a-8bf6-8ff685d27325" (UID: "c1211ebf-bf2b-422a-8bf6-8ff685d27325"). InnerVolumeSpecName "kube-api-access-6l874". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.529196 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1211ebf-bf2b-422a-8bf6-8ff685d27325" (UID: "c1211ebf-bf2b-422a-8bf6-8ff685d27325"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.547656 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-config" (OuterVolumeSpecName: "config") pod "c1211ebf-bf2b-422a-8bf6-8ff685d27325" (UID: "c1211ebf-bf2b-422a-8bf6-8ff685d27325"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.613359 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.613378 4656 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.613388 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l874\" (UniqueName: \"kubernetes.io/projected/c1211ebf-bf2b-422a-8bf6-8ff685d27325-kube-api-access-6l874\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.632801 4656 generic.go:334] "Generic (PLEG): container finished" podID="ad05b286-d454-4ab3-b003-cdff0f888c5e" containerID="a7cdf92129f7f9358ac4f1204fb73d1bb90fbf838c4a8a5616d619a3b25b6f8e" exitCode=0 Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 
15:37:28.632865 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9k966" event={"ID":"ad05b286-d454-4ab3-b003-cdff0f888c5e","Type":"ContainerDied","Data":"a7cdf92129f7f9358ac4f1204fb73d1bb90fbf838c4a8a5616d619a3b25b6f8e"} Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.632893 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9k966" event={"ID":"ad05b286-d454-4ab3-b003-cdff0f888c5e","Type":"ContainerStarted","Data":"3f9bb71ffdcc1857a0ac29f30f2c89bfc60413b2c902a3d6f5b76c55bc1d2bff"} Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.635859 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6a46bc21-63f0-461d-b33d-ec98cb059408","Type":"ContainerStarted","Data":"4d8d41bfb4a46b7e7fa072303556bee49e741b0f0cc9e550c2f36af9fc590dfb"} Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.662557 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vs8p5" event={"ID":"f39f654e-78ca-44c2-8c6a-a1de43a83d3f","Type":"ContainerStarted","Data":"86f2f5f423b174a4cc38f83285eed5ba3620dc56800babdf0cdeedb3b809983d"} Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.673459 4656 generic.go:334] "Generic (PLEG): container finished" podID="cadd7215-631f-469f-9d02-243efd40508a" containerID="0c99690cd99ff76bd306e4eb9caf6e9e98dd2c44cd01b18972ac6c91a1b608d2" exitCode=0 Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.673514 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" event={"ID":"cadd7215-631f-469f-9d02-243efd40508a","Type":"ContainerDied","Data":"0c99690cd99ff76bd306e4eb9caf6e9e98dd2c44cd01b18972ac6c91a1b608d2"} Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.673542 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" 
event={"ID":"cadd7215-631f-469f-9d02-243efd40508a","Type":"ContainerStarted","Data":"153a0d6ff43f7a95068acd9e82a4dfc8630051312d6191ff39fb5f25db0cc785"} Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.748324 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" event={"ID":"c1211ebf-bf2b-422a-8bf6-8ff685d27325","Type":"ContainerDied","Data":"63a5615d0933628808006a1d7c767190e6e914053a00c94d355b26e0c1287a0b"} Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.748943 4656 scope.go:117] "RemoveContainer" containerID="d7b43fdb99a2acae6cf8b5d6e5948315f1bd352f9c1ad850d550c3eb79c16b71" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.749212 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.867982 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e41d89b-8943-4aec-9e33-00db569a2ce8","Type":"ContainerStarted","Data":"9e9c19d2591d020a6ca5e65812bb5f91cf0e7e3141a1ca94d0fe83dd22c65397"} Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.891896 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.715665373 podStartE2EDuration="55.891864709s" podCreationTimestamp="2026-01-28 15:36:33 +0000 UTC" firstStartedPulling="2026-01-28 15:36:37.431248536 +0000 UTC m=+1087.939419340" lastFinishedPulling="2026-01-28 15:37:19.607447862 +0000 UTC m=+1130.115618676" observedRunningTime="2026-01-28 15:37:28.851260952 +0000 UTC m=+1139.359431756" watchObservedRunningTime="2026-01-28 15:37:28.891864709 +0000 UTC m=+1139.400035513" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.919970 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vs8p5" podStartSLOduration=3.9199444100000003 
podStartE2EDuration="3.91994441s" podCreationTimestamp="2026-01-28 15:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:37:28.897606983 +0000 UTC m=+1139.405777787" watchObservedRunningTime="2026-01-28 15:37:28.91994441 +0000 UTC m=+1139.428115214" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.015437 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-vnkgc"] Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.026755 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-vnkgc"] Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.051298 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=12.312657509 podStartE2EDuration="57.051270683s" podCreationTimestamp="2026-01-28 15:36:32 +0000 UTC" firstStartedPulling="2026-01-28 15:36:34.703783594 +0000 UTC m=+1085.211954398" lastFinishedPulling="2026-01-28 15:37:19.442396768 +0000 UTC m=+1129.950567572" observedRunningTime="2026-01-28 15:37:29.046000193 +0000 UTC m=+1139.554170997" watchObservedRunningTime="2026-01-28 15:37:29.051270683 +0000 UTC m=+1139.559441487" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.109153 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jpj69"] Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.205278 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d610f18-f075-4fdd-9618-c807584a0d12" path="/var/lib/kubelet/pods/4d610f18-f075-4fdd-9618-c807584a0d12/volumes" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.205800 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1211ebf-bf2b-422a-8bf6-8ff685d27325" path="/var/lib/kubelet/pods/c1211ebf-bf2b-422a-8bf6-8ff685d27325/volumes" Jan 28 15:37:29 crc 
kubenswrapper[4656]: I0128 15:37:29.274115 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 28 15:37:29 crc kubenswrapper[4656]: E0128 15:37:29.275884 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1211ebf-bf2b-422a-8bf6-8ff685d27325" containerName="init" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.275981 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1211ebf-bf2b-422a-8bf6-8ff685d27325" containerName="init" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.276541 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1211ebf-bf2b-422a-8bf6-8ff685d27325" containerName="init" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.309097 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.311770 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.315201 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.315427 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-vbjkk" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.315573 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.315725 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.382746 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.427845 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-dns-svc\") pod \"ad05b286-d454-4ab3-b003-cdff0f888c5e\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.427972 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5rh4\" (UniqueName: \"kubernetes.io/projected/ad05b286-d454-4ab3-b003-cdff0f888c5e-kube-api-access-l5rh4\") pod \"ad05b286-d454-4ab3-b003-cdff0f888c5e\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.428866 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-ovsdbserver-nb\") pod \"ad05b286-d454-4ab3-b003-cdff0f888c5e\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.428907 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-config\") pod \"ad05b286-d454-4ab3-b003-cdff0f888c5e\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.429136 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwvsh\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-kube-api-access-jwvsh\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.429199 4656 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19a7b52a-dfe9-47b0-818e-48752d76068e-cache\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.429230 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a7b52a-dfe9-47b0-818e-48752d76068e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.429265 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19a7b52a-dfe9-47b0-818e-48752d76068e-lock\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.429320 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.429433 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.469558 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad05b286-d454-4ab3-b003-cdff0f888c5e-kube-api-access-l5rh4" (OuterVolumeSpecName: 
"kube-api-access-l5rh4") pod "ad05b286-d454-4ab3-b003-cdff0f888c5e" (UID: "ad05b286-d454-4ab3-b003-cdff0f888c5e"). InnerVolumeSpecName "kube-api-access-l5rh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.472709 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-config" (OuterVolumeSpecName: "config") pod "ad05b286-d454-4ab3-b003-cdff0f888c5e" (UID: "ad05b286-d454-4ab3-b003-cdff0f888c5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.505833 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ad05b286-d454-4ab3-b003-cdff0f888c5e" (UID: "ad05b286-d454-4ab3-b003-cdff0f888c5e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.524368 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad05b286-d454-4ab3-b003-cdff0f888c5e" (UID: "ad05b286-d454-4ab3-b003-cdff0f888c5e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531112 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531201 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwvsh\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-kube-api-access-jwvsh\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531236 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19a7b52a-dfe9-47b0-818e-48752d76068e-cache\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531273 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a7b52a-dfe9-47b0-818e-48752d76068e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531301 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19a7b52a-dfe9-47b0-818e-48752d76068e-lock\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531340 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531381 4656 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531392 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5rh4\" (UniqueName: \"kubernetes.io/projected/ad05b286-d454-4ab3-b003-cdff0f888c5e-kube-api-access-l5rh4\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531403 4656 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531411 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531724 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.532860 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19a7b52a-dfe9-47b0-818e-48752d76068e-cache\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc 
kubenswrapper[4656]: E0128 15:37:29.533067 4656 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:37:29 crc kubenswrapper[4656]: E0128 15:37:29.533126 4656 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:37:29 crc kubenswrapper[4656]: E0128 15:37:29.533412 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift podName:19a7b52a-dfe9-47b0-818e-48752d76068e nodeName:}" failed. No retries permitted until 2026-01-28 15:37:30.033190009 +0000 UTC m=+1140.541360913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift") pod "swift-storage-0" (UID: "19a7b52a-dfe9-47b0-818e-48752d76068e") : configmap "swift-ring-files" not found Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.540964 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19a7b52a-dfe9-47b0-818e-48752d76068e-lock\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: E0128 15:37:29.549990 4656 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 28 15:37:29 crc kubenswrapper[4656]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/cadd7215-631f-469f-9d02-243efd40508a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 28 15:37:29 crc kubenswrapper[4656]: > podSandboxID="153a0d6ff43f7a95068acd9e82a4dfc8630051312d6191ff39fb5f25db0cc785" Jan 28 15:37:29 crc kubenswrapper[4656]: E0128 15:37:29.550250 4656 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 15:37:29 
crc kubenswrapper[4656]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66n6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86db49b7ff-tmnbd_openstack(cadd7215-631f-469f-9d02-243efd40508a): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/cadd7215-631f-469f-9d02-243efd40508a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 28 15:37:29 crc kubenswrapper[4656]: > logger="UnhandledError" Jan 28 15:37:29 crc kubenswrapper[4656]: E0128 15:37:29.551784 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/cadd7215-631f-469f-9d02-243efd40508a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" podUID="cadd7215-631f-469f-9d02-243efd40508a" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.557076 4656 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a7b52a-dfe9-47b0-818e-48752d76068e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.567296 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwvsh\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-kube-api-access-jwvsh\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.575039 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.632755 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.632812 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.699208 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.717051 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.764671 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.823844 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-mfbzm"] 
Jan 28 15:37:29 crc kubenswrapper[4656]: E0128 15:37:29.824423 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad05b286-d454-4ab3-b003-cdff0f888c5e" containerName="init" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.824449 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad05b286-d454-4ab3-b003-cdff0f888c5e" containerName="init" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.824691 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad05b286-d454-4ab3-b003-cdff0f888c5e" containerName="init" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.825588 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.835751 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.838111 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-swiftconf\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.838153 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5db57b48-1e29-4c73-b488-d6998232fce1-etc-swift\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.838203 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-dispersionconf\") pod 
\"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.838262 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-ring-data-devices\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.838283 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hgrh\" (UniqueName: \"kubernetes.io/projected/5db57b48-1e29-4c73-b488-d6998232fce1-kube-api-access-6hgrh\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.838317 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-scripts\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.838351 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-combined-ca-bundle\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.840238 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.840266 4656 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.852079 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mfbzm"] Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.875996 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9k966" event={"ID":"ad05b286-d454-4ab3-b003-cdff0f888c5e","Type":"ContainerDied","Data":"3f9bb71ffdcc1857a0ac29f30f2c89bfc60413b2c902a3d6f5b76c55bc1d2bff"} Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.876283 4656 scope.go:117] "RemoveContainer" containerID="a7cdf92129f7f9358ac4f1204fb73d1bb90fbf838c4a8a5616d619a3b25b6f8e" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.876032 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.877851 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jpj69" event={"ID":"c152e3b8-7b70-4580-988e-4cf053f87aa2","Type":"ContainerStarted","Data":"3f3fb240badabe1f5e9c9d62c574ea57eeefcfffb2c7d9f13df634359844b112"} Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.877952 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jpj69" event={"ID":"c152e3b8-7b70-4580-988e-4cf053f87aa2","Type":"ContainerStarted","Data":"51515a2b95a452c7a97ce3ad5d48cea215fd18e12f3ce81f0a7c8990597a60e1"} Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.878809 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.939998 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-scripts\") pod 
\"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.940091 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-combined-ca-bundle\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.940362 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-swiftconf\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.940598 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5db57b48-1e29-4c73-b488-d6998232fce1-etc-swift\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.940643 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-dispersionconf\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.940790 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-ring-data-devices\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " 
pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.940830 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hgrh\" (UniqueName: \"kubernetes.io/projected/5db57b48-1e29-4c73-b488-d6998232fce1-kube-api-access-6hgrh\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.951629 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-ring-data-devices\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.953176 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5db57b48-1e29-4c73-b488-d6998232fce1-etc-swift\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.953444 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-scripts\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.959476 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-combined-ca-bundle\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.963369 
4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-swiftconf\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.966101 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hgrh\" (UniqueName: \"kubernetes.io/projected/5db57b48-1e29-4c73-b488-d6998232fce1-kube-api-access-6hgrh\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.989638 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-dispersionconf\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.003525 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.033016 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9k966"] Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.043352 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:30 crc kubenswrapper[4656]: E0128 15:37:30.046332 4656 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:37:30 crc kubenswrapper[4656]: E0128 15:37:30.046367 4656 
projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:37:30 crc kubenswrapper[4656]: E0128 15:37:30.046413 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift podName:19a7b52a-dfe9-47b0-818e-48752d76068e nodeName:}" failed. No retries permitted until 2026-01-28 15:37:31.046396258 +0000 UTC m=+1141.554567062 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift") pod "swift-storage-0" (UID: "19a7b52a-dfe9-47b0-818e-48752d76068e") : configmap "swift-ring-files" not found Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.101219 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9k966"] Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.145865 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.148866 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.623418 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.627319 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.631372 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.631515 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.631781 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.631782 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-5bn98" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.688487 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.781104 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mfbzm"] Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.781760 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd90425-2113-4787-b18d-332f32cedd87-config\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.781829 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s28s\" (UniqueName: \"kubernetes.io/projected/2fd90425-2113-4787-b18d-332f32cedd87-kube-api-access-5s28s\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.781855 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/2fd90425-2113-4787-b18d-332f32cedd87-scripts\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.781881 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2fd90425-2113-4787-b18d-332f32cedd87-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.781898 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.781959 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.782025 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.883881 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.883960 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.884027 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd90425-2113-4787-b18d-332f32cedd87-config\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.884079 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s28s\" (UniqueName: \"kubernetes.io/projected/2fd90425-2113-4787-b18d-332f32cedd87-kube-api-access-5s28s\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.884102 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fd90425-2113-4787-b18d-332f32cedd87-scripts\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.884131 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2fd90425-2113-4787-b18d-332f32cedd87-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.884175 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.884741 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2fd90425-2113-4787-b18d-332f32cedd87-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.885231 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fd90425-2113-4787-b18d-332f32cedd87-scripts\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.887958 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd90425-2113-4787-b18d-332f32cedd87-config\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.892125 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.893474 4656 generic.go:334] "Generic (PLEG): container finished" podID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerID="3f3fb240badabe1f5e9c9d62c574ea57eeefcfffb2c7d9f13df634359844b112" exitCode=0 Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.893529 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-698758b865-jpj69" event={"ID":"c152e3b8-7b70-4580-988e-4cf053f87aa2","Type":"ContainerDied","Data":"3f3fb240badabe1f5e9c9d62c574ea57eeefcfffb2c7d9f13df634359844b112"} Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.901986 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" event={"ID":"cadd7215-631f-469f-9d02-243efd40508a","Type":"ContainerStarted","Data":"58b49ff7d74020e7af2d978a9b51add46734f221caa85a5d1256bb5216b6bba6"} Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.903788 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mfbzm" event={"ID":"5db57b48-1e29-4c73-b488-d6998232fce1","Type":"ContainerStarted","Data":"bc88037467be5dfdde8170edc5a85201ae8480da1a1918ca47b82aaa571d9b74"} Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.904966 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.907257 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.909871 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s28s\" (UniqueName: \"kubernetes.io/projected/2fd90425-2113-4787-b18d-332f32cedd87-kube-api-access-5s28s\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.994772 4656 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.087656 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:31 crc kubenswrapper[4656]: E0128 15:37:31.088947 4656 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:37:31 crc kubenswrapper[4656]: E0128 15:37:31.089050 4656 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:37:31 crc kubenswrapper[4656]: E0128 15:37:31.089154 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift podName:19a7b52a-dfe9-47b0-818e-48752d76068e nodeName:}" failed. No retries permitted until 2026-01-28 15:37:33.08913172 +0000 UTC m=+1143.597302524 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift") pod "swift-storage-0" (UID: "19a7b52a-dfe9-47b0-818e-48752d76068e") : configmap "swift-ring-files" not found Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.226850 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad05b286-d454-4ab3-b003-cdff0f888c5e" path="/var/lib/kubelet/pods/ad05b286-d454-4ab3-b003-cdff0f888c5e/volumes" Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.521711 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.913683 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jpj69" event={"ID":"c152e3b8-7b70-4580-988e-4cf053f87aa2","Type":"ContainerStarted","Data":"88b915705c2ba5912b5cc50a01d396aa718cc235e8432ca2223a91e7cf085d37"} Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.914382 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.920907 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2fd90425-2113-4787-b18d-332f32cedd87","Type":"ContainerStarted","Data":"552ce66e0bf904b0e7f11b0bdb02820d35a856f16b3a8bbbc9a3fe3e79c54749"} Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.922210 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.935736 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-jpj69" podStartSLOduration=3.935710309 podStartE2EDuration="3.935710309s" podCreationTimestamp="2026-01-28 15:37:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:37:31.93222457 +0000 UTC m=+1142.440395404" watchObservedRunningTime="2026-01-28 15:37:31.935710309 +0000 UTC m=+1142.443881113" Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.983054 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" podStartSLOduration=6.983033148 podStartE2EDuration="6.983033148s" podCreationTimestamp="2026-01-28 15:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:37:31.978236671 +0000 UTC m=+1142.486407495" watchObservedRunningTime="2026-01-28 15:37:31.983033148 +0000 UTC m=+1142.491203952" Jan 28 15:37:33 crc kubenswrapper[4656]: I0128 15:37:33.149815 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:33 crc kubenswrapper[4656]: E0128 15:37:33.150094 4656 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:37:33 crc kubenswrapper[4656]: E0128 15:37:33.150132 4656 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:37:33 crc kubenswrapper[4656]: E0128 15:37:33.150217 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift podName:19a7b52a-dfe9-47b0-818e-48752d76068e nodeName:}" failed. No retries permitted until 2026-01-28 15:37:37.150197917 +0000 UTC m=+1147.658368721 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift") pod "swift-storage-0" (UID: "19a7b52a-dfe9-47b0-818e-48752d76068e") : configmap "swift-ring-files" not found Jan 28 15:37:33 crc kubenswrapper[4656]: I0128 15:37:33.553515 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 28 15:37:33 crc kubenswrapper[4656]: I0128 15:37:33.553815 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 28 15:37:35 crc kubenswrapper[4656]: I0128 15:37:35.929275 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.026650 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.376369 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.437539 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.437925 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.555600 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.991795 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mfbzm" event={"ID":"5db57b48-1e29-4c73-b488-d6998232fce1","Type":"ContainerStarted","Data":"b93135778ed8920d21b7e918ab759c599172840deada7079b43816b5823801a7"} Jan 28 15:37:36 crc 
kubenswrapper[4656]: I0128 15:37:36.995660 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2fd90425-2113-4787-b18d-332f32cedd87","Type":"ContainerStarted","Data":"5fce9a43853d7e0ead6a3ea8bbb3d36df9b015633215d8aa2ac6da3170d29ab6"} Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.995732 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2fd90425-2113-4787-b18d-332f32cedd87","Type":"ContainerStarted","Data":"083fc772d4ef7d96bd4029f24a651783b0497dacc5e8bb874aff4b49bc744cc2"} Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.995872 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 28 15:37:37 crc kubenswrapper[4656]: I0128 15:37:37.018889 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-mfbzm" podStartSLOduration=2.8168953329999997 podStartE2EDuration="8.018867817s" podCreationTimestamp="2026-01-28 15:37:29 +0000 UTC" firstStartedPulling="2026-01-28 15:37:30.799400591 +0000 UTC m=+1141.307571395" lastFinishedPulling="2026-01-28 15:37:36.001373075 +0000 UTC m=+1146.509543879" observedRunningTime="2026-01-28 15:37:37.010627812 +0000 UTC m=+1147.518798616" watchObservedRunningTime="2026-01-28 15:37:37.018867817 +0000 UTC m=+1147.527038621" Jan 28 15:37:37 crc kubenswrapper[4656]: I0128 15:37:37.118430 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 28 15:37:37 crc kubenswrapper[4656]: I0128 15:37:37.146428 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.675538998 podStartE2EDuration="7.146404412s" podCreationTimestamp="2026-01-28 15:37:30 +0000 UTC" firstStartedPulling="2026-01-28 15:37:31.529396429 +0000 UTC m=+1142.037567243" lastFinishedPulling="2026-01-28 15:37:36.000261833 +0000 UTC m=+1146.508432657" 
observedRunningTime="2026-01-28 15:37:37.042019457 +0000 UTC m=+1147.550190261" watchObservedRunningTime="2026-01-28 15:37:37.146404412 +0000 UTC m=+1147.654575216" Jan 28 15:37:37 crc kubenswrapper[4656]: I0128 15:37:37.229297 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:37 crc kubenswrapper[4656]: E0128 15:37:37.230730 4656 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:37:37 crc kubenswrapper[4656]: E0128 15:37:37.230764 4656 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:37:37 crc kubenswrapper[4656]: E0128 15:37:37.230820 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift podName:19a7b52a-dfe9-47b0-818e-48752d76068e nodeName:}" failed. No retries permitted until 2026-01-28 15:37:45.230795688 +0000 UTC m=+1155.738966562 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift") pod "swift-storage-0" (UID: "19a7b52a-dfe9-47b0-818e-48752d76068e") : configmap "swift-ring-files" not found Jan 28 15:37:37 crc kubenswrapper[4656]: I0128 15:37:37.857268 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 15:37:38 crc kubenswrapper[4656]: I0128 15:37:38.467907 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:38 crc kubenswrapper[4656]: I0128 15:37:38.555261 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tmnbd"] Jan 28 15:37:38 crc kubenswrapper[4656]: I0128 15:37:38.555517 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" podUID="cadd7215-631f-469f-9d02-243efd40508a" containerName="dnsmasq-dns" containerID="cri-o://58b49ff7d74020e7af2d978a9b51add46734f221caa85a5d1256bb5216b6bba6" gracePeriod=10 Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.025802 4656 generic.go:334] "Generic (PLEG): container finished" podID="cadd7215-631f-469f-9d02-243efd40508a" containerID="58b49ff7d74020e7af2d978a9b51add46734f221caa85a5d1256bb5216b6bba6" exitCode=0 Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.025867 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" event={"ID":"cadd7215-631f-469f-9d02-243efd40508a","Type":"ContainerDied","Data":"58b49ff7d74020e7af2d978a9b51add46734f221caa85a5d1256bb5216b6bba6"} Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.026244 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" 
event={"ID":"cadd7215-631f-469f-9d02-243efd40508a","Type":"ContainerDied","Data":"153a0d6ff43f7a95068acd9e82a4dfc8630051312d6191ff39fb5f25db0cc785"} Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.026282 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="153a0d6ff43f7a95068acd9e82a4dfc8630051312d6191ff39fb5f25db0cc785" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.071893 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.166080 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-config\") pod \"cadd7215-631f-469f-9d02-243efd40508a\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.166274 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66n6d\" (UniqueName: \"kubernetes.io/projected/cadd7215-631f-469f-9d02-243efd40508a-kube-api-access-66n6d\") pod \"cadd7215-631f-469f-9d02-243efd40508a\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.166342 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-sb\") pod \"cadd7215-631f-469f-9d02-243efd40508a\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.166429 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-dns-svc\") pod \"cadd7215-631f-469f-9d02-243efd40508a\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " Jan 28 15:37:39 crc kubenswrapper[4656]: 
I0128 15:37:39.166455 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-nb\") pod \"cadd7215-631f-469f-9d02-243efd40508a\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.181969 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cadd7215-631f-469f-9d02-243efd40508a-kube-api-access-66n6d" (OuterVolumeSpecName: "kube-api-access-66n6d") pod "cadd7215-631f-469f-9d02-243efd40508a" (UID: "cadd7215-631f-469f-9d02-243efd40508a"). InnerVolumeSpecName "kube-api-access-66n6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.216096 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cadd7215-631f-469f-9d02-243efd40508a" (UID: "cadd7215-631f-469f-9d02-243efd40508a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.217345 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cadd7215-631f-469f-9d02-243efd40508a" (UID: "cadd7215-631f-469f-9d02-243efd40508a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.223233 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cadd7215-631f-469f-9d02-243efd40508a" (UID: "cadd7215-631f-469f-9d02-243efd40508a"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.233920 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-config" (OuterVolumeSpecName: "config") pod "cadd7215-631f-469f-9d02-243efd40508a" (UID: "cadd7215-631f-469f-9d02-243efd40508a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.269047 4656 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.269085 4656 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.269100 4656 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.269117 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.269131 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66n6d\" (UniqueName: \"kubernetes.io/projected/cadd7215-631f-469f-9d02-243efd40508a-kube-api-access-66n6d\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:40 crc kubenswrapper[4656]: I0128 15:37:40.032230 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:40 crc kubenswrapper[4656]: I0128 15:37:40.096344 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tmnbd"] Jan 28 15:37:40 crc kubenswrapper[4656]: I0128 15:37:40.102518 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tmnbd"] Jan 28 15:37:41 crc kubenswrapper[4656]: I0128 15:37:41.180927 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cadd7215-631f-469f-9d02-243efd40508a" path="/var/lib/kubelet/pods/cadd7215-631f-469f-9d02-243efd40508a/volumes" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.354364 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-44tss"] Jan 28 15:37:42 crc kubenswrapper[4656]: E0128 15:37:42.355856 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cadd7215-631f-469f-9d02-243efd40508a" containerName="dnsmasq-dns" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.355979 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="cadd7215-631f-469f-9d02-243efd40508a" containerName="dnsmasq-dns" Jan 28 15:37:42 crc kubenswrapper[4656]: E0128 15:37:42.356129 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cadd7215-631f-469f-9d02-243efd40508a" containerName="init" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.356231 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="cadd7215-631f-469f-9d02-243efd40508a" containerName="init" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.356595 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="cadd7215-631f-469f-9d02-243efd40508a" containerName="dnsmasq-dns" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.357540 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-44tss" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.360086 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.372526 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-44tss"] Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.581969 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbcnt\" (UniqueName: \"kubernetes.io/projected/2e05cea6-abbf-4ab7-b46f-e07960967728-kube-api-access-cbcnt\") pod \"root-account-create-update-44tss\" (UID: \"2e05cea6-abbf-4ab7-b46f-e07960967728\") " pod="openstack/root-account-create-update-44tss" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.582154 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e05cea6-abbf-4ab7-b46f-e07960967728-operator-scripts\") pod \"root-account-create-update-44tss\" (UID: \"2e05cea6-abbf-4ab7-b46f-e07960967728\") " pod="openstack/root-account-create-update-44tss" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.684800 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e05cea6-abbf-4ab7-b46f-e07960967728-operator-scripts\") pod \"root-account-create-update-44tss\" (UID: \"2e05cea6-abbf-4ab7-b46f-e07960967728\") " pod="openstack/root-account-create-update-44tss" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.684903 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbcnt\" (UniqueName: \"kubernetes.io/projected/2e05cea6-abbf-4ab7-b46f-e07960967728-kube-api-access-cbcnt\") pod \"root-account-create-update-44tss\" (UID: 
\"2e05cea6-abbf-4ab7-b46f-e07960967728\") " pod="openstack/root-account-create-update-44tss" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.686480 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e05cea6-abbf-4ab7-b46f-e07960967728-operator-scripts\") pod \"root-account-create-update-44tss\" (UID: \"2e05cea6-abbf-4ab7-b46f-e07960967728\") " pod="openstack/root-account-create-update-44tss" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.723324 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbcnt\" (UniqueName: \"kubernetes.io/projected/2e05cea6-abbf-4ab7-b46f-e07960967728-kube-api-access-cbcnt\") pod \"root-account-create-update-44tss\" (UID: \"2e05cea6-abbf-4ab7-b46f-e07960967728\") " pod="openstack/root-account-create-update-44tss" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.973952 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-44tss" Jan 28 15:37:43 crc kubenswrapper[4656]: I0128 15:37:43.582649 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-44tss"] Jan 28 15:37:44 crc kubenswrapper[4656]: I0128 15:37:44.150905 4656 generic.go:334] "Generic (PLEG): container finished" podID="2e05cea6-abbf-4ab7-b46f-e07960967728" containerID="83b94c30ed93547fd506be79bae97518f5ab107ea9f58e554bb480d642b3aaf6" exitCode=0 Jan 28 15:37:44 crc kubenswrapper[4656]: I0128 15:37:44.151241 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-44tss" event={"ID":"2e05cea6-abbf-4ab7-b46f-e07960967728","Type":"ContainerDied","Data":"83b94c30ed93547fd506be79bae97518f5ab107ea9f58e554bb480d642b3aaf6"} Jan 28 15:37:44 crc kubenswrapper[4656]: I0128 15:37:44.151291 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-44tss" event={"ID":"2e05cea6-abbf-4ab7-b46f-e07960967728","Type":"ContainerStarted","Data":"3d34f085a12e9171d1883ba4de0c28f145fef35fb7524b8837c03d40ee2838ed"} Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.007006 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-gnzml"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.008793 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-gnzml" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.031372 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gnzml"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.129347 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfcf1378-d339-426c-bd01-36cd47172c37-operator-scripts\") pod \"keystone-db-create-gnzml\" (UID: \"dfcf1378-d339-426c-bd01-36cd47172c37\") " pod="openstack/keystone-db-create-gnzml" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.129695 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7fx4\" (UniqueName: \"kubernetes.io/projected/dfcf1378-d339-426c-bd01-36cd47172c37-kube-api-access-j7fx4\") pod \"keystone-db-create-gnzml\" (UID: \"dfcf1378-d339-426c-bd01-36cd47172c37\") " pod="openstack/keystone-db-create-gnzml" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.180591 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-93fe-account-create-update-r2snq"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.181834 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-93fe-account-create-update-r2snq" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.190519 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.197713 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-93fe-account-create-update-r2snq"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.235354 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfcf1378-d339-426c-bd01-36cd47172c37-operator-scripts\") pod \"keystone-db-create-gnzml\" (UID: \"dfcf1378-d339-426c-bd01-36cd47172c37\") " pod="openstack/keystone-db-create-gnzml" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.238070 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099dbe89-7289-453c-84a2-f0de86b792cf-operator-scripts\") pod \"keystone-93fe-account-create-update-r2snq\" (UID: \"099dbe89-7289-453c-84a2-f0de86b792cf\") " pod="openstack/keystone-93fe-account-create-update-r2snq" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.238291 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch2rp\" (UniqueName: \"kubernetes.io/projected/099dbe89-7289-453c-84a2-f0de86b792cf-kube-api-access-ch2rp\") pod \"keystone-93fe-account-create-update-r2snq\" (UID: \"099dbe89-7289-453c-84a2-f0de86b792cf\") " pod="openstack/keystone-93fe-account-create-update-r2snq" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.237482 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfcf1378-d339-426c-bd01-36cd47172c37-operator-scripts\") pod \"keystone-db-create-gnzml\" (UID: 
\"dfcf1378-d339-426c-bd01-36cd47172c37\") " pod="openstack/keystone-db-create-gnzml" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.238508 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7fx4\" (UniqueName: \"kubernetes.io/projected/dfcf1378-d339-426c-bd01-36cd47172c37-kube-api-access-j7fx4\") pod \"keystone-db-create-gnzml\" (UID: \"dfcf1378-d339-426c-bd01-36cd47172c37\") " pod="openstack/keystone-db-create-gnzml" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.238627 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:45 crc kubenswrapper[4656]: E0128 15:37:45.238965 4656 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:37:45 crc kubenswrapper[4656]: E0128 15:37:45.239042 4656 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:37:45 crc kubenswrapper[4656]: E0128 15:37:45.239155 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift podName:19a7b52a-dfe9-47b0-818e-48752d76068e nodeName:}" failed. No retries permitted until 2026-01-28 15:38:01.239133393 +0000 UTC m=+1171.747304197 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift") pod "swift-storage-0" (UID: "19a7b52a-dfe9-47b0-818e-48752d76068e") : configmap "swift-ring-files" not found Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.270343 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7fx4\" (UniqueName: \"kubernetes.io/projected/dfcf1378-d339-426c-bd01-36cd47172c37-kube-api-access-j7fx4\") pod \"keystone-db-create-gnzml\" (UID: \"dfcf1378-d339-426c-bd01-36cd47172c37\") " pod="openstack/keystone-db-create-gnzml" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.328370 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gnzml" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.340402 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099dbe89-7289-453c-84a2-f0de86b792cf-operator-scripts\") pod \"keystone-93fe-account-create-update-r2snq\" (UID: \"099dbe89-7289-453c-84a2-f0de86b792cf\") " pod="openstack/keystone-93fe-account-create-update-r2snq" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.340442 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch2rp\" (UniqueName: \"kubernetes.io/projected/099dbe89-7289-453c-84a2-f0de86b792cf-kube-api-access-ch2rp\") pod \"keystone-93fe-account-create-update-r2snq\" (UID: \"099dbe89-7289-453c-84a2-f0de86b792cf\") " pod="openstack/keystone-93fe-account-create-update-r2snq" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.341592 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099dbe89-7289-453c-84a2-f0de86b792cf-operator-scripts\") pod \"keystone-93fe-account-create-update-r2snq\" (UID: 
\"099dbe89-7289-453c-84a2-f0de86b792cf\") " pod="openstack/keystone-93fe-account-create-update-r2snq" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.353323 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-b64sc"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.354338 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b64sc" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.399717 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch2rp\" (UniqueName: \"kubernetes.io/projected/099dbe89-7289-453c-84a2-f0de86b792cf-kube-api-access-ch2rp\") pod \"keystone-93fe-account-create-update-r2snq\" (UID: \"099dbe89-7289-453c-84a2-f0de86b792cf\") " pod="openstack/keystone-93fe-account-create-update-r2snq" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.421222 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b64sc"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.504864 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-93fe-account-create-update-r2snq" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.527704 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3d7e-account-create-update-6v56m"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.528943 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.533977 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.558226 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-787d7\" (UniqueName: \"kubernetes.io/projected/e4037dd9-6fe0-4f3a-9fca-4e716126a317-kube-api-access-787d7\") pod \"placement-db-create-b64sc\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") " pod="openstack/placement-db-create-b64sc" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.558607 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4037dd9-6fe0-4f3a-9fca-4e716126a317-operator-scripts\") pod \"placement-db-create-b64sc\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") " pod="openstack/placement-db-create-b64sc" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.568252 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3d7e-account-create-update-6v56m"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.661107 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4037dd9-6fe0-4f3a-9fca-4e716126a317-operator-scripts\") pod \"placement-db-create-b64sc\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") " pod="openstack/placement-db-create-b64sc" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.661248 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a6af64-b874-4449-ae51-8902df8e9bdf-operator-scripts\") pod \"placement-3d7e-account-create-update-6v56m\" (UID: 
\"c4a6af64-b874-4449-ae51-8902df8e9bdf\") " pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.661283 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk9sj\" (UniqueName: \"kubernetes.io/projected/c4a6af64-b874-4449-ae51-8902df8e9bdf-kube-api-access-tk9sj\") pod \"placement-3d7e-account-create-update-6v56m\" (UID: \"c4a6af64-b874-4449-ae51-8902df8e9bdf\") " pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.661367 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-787d7\" (UniqueName: \"kubernetes.io/projected/e4037dd9-6fe0-4f3a-9fca-4e716126a317-kube-api-access-787d7\") pod \"placement-db-create-b64sc\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") " pod="openstack/placement-db-create-b64sc" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.670833 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2q82c"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.676078 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4037dd9-6fe0-4f3a-9fca-4e716126a317-operator-scripts\") pod \"placement-db-create-b64sc\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") " pod="openstack/placement-db-create-b64sc" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.681190 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2q82c" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.683677 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2q82c"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.711502 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-787d7\" (UniqueName: \"kubernetes.io/projected/e4037dd9-6fe0-4f3a-9fca-4e716126a317-kube-api-access-787d7\") pod \"placement-db-create-b64sc\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") " pod="openstack/placement-db-create-b64sc" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.763082 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk9sj\" (UniqueName: \"kubernetes.io/projected/c4a6af64-b874-4449-ae51-8902df8e9bdf-kube-api-access-tk9sj\") pod \"placement-3d7e-account-create-update-6v56m\" (UID: \"c4a6af64-b874-4449-ae51-8902df8e9bdf\") " pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.763122 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a6af64-b874-4449-ae51-8902df8e9bdf-operator-scripts\") pod \"placement-3d7e-account-create-update-6v56m\" (UID: \"c4a6af64-b874-4449-ae51-8902df8e9bdf\") " pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.763870 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a6af64-b874-4449-ae51-8902df8e9bdf-operator-scripts\") pod \"placement-3d7e-account-create-update-6v56m\" (UID: \"c4a6af64-b874-4449-ae51-8902df8e9bdf\") " pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.765917 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-44tss" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.792527 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b64sc" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.802303 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk9sj\" (UniqueName: \"kubernetes.io/projected/c4a6af64-b874-4449-ae51-8902df8e9bdf-kube-api-access-tk9sj\") pod \"placement-3d7e-account-create-update-6v56m\" (UID: \"c4a6af64-b874-4449-ae51-8902df8e9bdf\") " pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.853880 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-91ca-account-create-update-lzb9w"] Jan 28 15:37:45 crc kubenswrapper[4656]: E0128 15:37:45.854459 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e05cea6-abbf-4ab7-b46f-e07960967728" containerName="mariadb-account-create-update" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.854507 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e05cea6-abbf-4ab7-b46f-e07960967728" containerName="mariadb-account-create-update" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.854733 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e05cea6-abbf-4ab7-b46f-e07960967728" containerName="mariadb-account-create-update" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.855469 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.856558 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.858727 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.865379 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbcnt\" (UniqueName: \"kubernetes.io/projected/2e05cea6-abbf-4ab7-b46f-e07960967728-kube-api-access-cbcnt\") pod \"2e05cea6-abbf-4ab7-b46f-e07960967728\" (UID: \"2e05cea6-abbf-4ab7-b46f-e07960967728\") " Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.865699 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e05cea6-abbf-4ab7-b46f-e07960967728-operator-scripts\") pod \"2e05cea6-abbf-4ab7-b46f-e07960967728\" (UID: \"2e05cea6-abbf-4ab7-b46f-e07960967728\") " Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.866628 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmf6z\" (UniqueName: \"kubernetes.io/projected/32493d3d-ca02-451a-b1b0-51d4f82d54f3-kube-api-access-jmf6z\") pod \"glance-db-create-2q82c\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") " pod="openstack/glance-db-create-2q82c" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.866765 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32493d3d-ca02-451a-b1b0-51d4f82d54f3-operator-scripts\") pod \"glance-db-create-2q82c\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") " pod="openstack/glance-db-create-2q82c" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.867340 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2e05cea6-abbf-4ab7-b46f-e07960967728-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e05cea6-abbf-4ab7-b46f-e07960967728" (UID: "2e05cea6-abbf-4ab7-b46f-e07960967728"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.869990 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-91ca-account-create-update-lzb9w"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.891069 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e05cea6-abbf-4ab7-b46f-e07960967728-kube-api-access-cbcnt" (OuterVolumeSpecName: "kube-api-access-cbcnt") pod "2e05cea6-abbf-4ab7-b46f-e07960967728" (UID: "2e05cea6-abbf-4ab7-b46f-e07960967728"). InnerVolumeSpecName "kube-api-access-cbcnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.968568 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32493d3d-ca02-451a-b1b0-51d4f82d54f3-operator-scripts\") pod \"glance-db-create-2q82c\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") " pod="openstack/glance-db-create-2q82c" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.968706 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e7d30a-cf9c-4aa5-880f-e214b8694082-operator-scripts\") pod \"glance-91ca-account-create-update-lzb9w\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") " pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.968760 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmf6z\" (UniqueName: 
\"kubernetes.io/projected/32493d3d-ca02-451a-b1b0-51d4f82d54f3-kube-api-access-jmf6z\") pod \"glance-db-create-2q82c\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") " pod="openstack/glance-db-create-2q82c" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.968792 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkg2c\" (UniqueName: \"kubernetes.io/projected/69e7d30a-cf9c-4aa5-880f-e214b8694082-kube-api-access-dkg2c\") pod \"glance-91ca-account-create-update-lzb9w\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") " pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.968848 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbcnt\" (UniqueName: \"kubernetes.io/projected/2e05cea6-abbf-4ab7-b46f-e07960967728-kube-api-access-cbcnt\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.968859 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e05cea6-abbf-4ab7-b46f-e07960967728-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.969625 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32493d3d-ca02-451a-b1b0-51d4f82d54f3-operator-scripts\") pod \"glance-db-create-2q82c\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") " pod="openstack/glance-db-create-2q82c" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.987770 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gnzml"] Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.019265 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmf6z\" (UniqueName: 
\"kubernetes.io/projected/32493d3d-ca02-451a-b1b0-51d4f82d54f3-kube-api-access-jmf6z\") pod \"glance-db-create-2q82c\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") " pod="openstack/glance-db-create-2q82c" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.071047 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkg2c\" (UniqueName: \"kubernetes.io/projected/69e7d30a-cf9c-4aa5-880f-e214b8694082-kube-api-access-dkg2c\") pod \"glance-91ca-account-create-update-lzb9w\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") " pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.071314 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e7d30a-cf9c-4aa5-880f-e214b8694082-operator-scripts\") pod \"glance-91ca-account-create-update-lzb9w\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") " pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.073789 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e7d30a-cf9c-4aa5-880f-e214b8694082-operator-scripts\") pod \"glance-91ca-account-create-update-lzb9w\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") " pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.092718 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkg2c\" (UniqueName: \"kubernetes.io/projected/69e7d30a-cf9c-4aa5-880f-e214b8694082-kube-api-access-dkg2c\") pod \"glance-91ca-account-create-update-lzb9w\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") " pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.168789 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-create-gnzml" event={"ID":"dfcf1378-d339-426c-bd01-36cd47172c37","Type":"ContainerStarted","Data":"1c3bb1d889fd4a796f70f3d1e9d53ba7f824ad2cfc48fac384daf6b22af00e87"} Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.170375 4656 generic.go:334] "Generic (PLEG): container finished" podID="5db57b48-1e29-4c73-b488-d6998232fce1" containerID="b93135778ed8920d21b7e918ab759c599172840deada7079b43816b5823801a7" exitCode=0 Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.170455 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mfbzm" event={"ID":"5db57b48-1e29-4c73-b488-d6998232fce1","Type":"ContainerDied","Data":"b93135778ed8920d21b7e918ab759c599172840deada7079b43816b5823801a7"} Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.179397 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-44tss" event={"ID":"2e05cea6-abbf-4ab7-b46f-e07960967728","Type":"ContainerDied","Data":"3d34f085a12e9171d1883ba4de0c28f145fef35fb7524b8837c03d40ee2838ed"} Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.179462 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d34f085a12e9171d1883ba4de0c28f145fef35fb7524b8837c03d40ee2838ed" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.179563 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-44tss" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.184665 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.263020 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-93fe-account-create-update-r2snq"] Jan 28 15:37:46 crc kubenswrapper[4656]: W0128 15:37:46.282401 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod099dbe89_7289_453c_84a2_f0de86b792cf.slice/crio-92f699f1bd35f758042c6a6090381504729114735d6bc5a5bc42a505fcc60b89 WatchSource:0}: Error finding container 92f699f1bd35f758042c6a6090381504729114735d6bc5a5bc42a505fcc60b89: Status 404 returned error can't find the container with id 92f699f1bd35f758042c6a6090381504729114735d6bc5a5bc42a505fcc60b89 Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.303028 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2q82c" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.399194 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b64sc"] Jan 28 15:37:46 crc kubenswrapper[4656]: W0128 15:37:46.409749 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4037dd9_6fe0_4f3a_9fca_4e716126a317.slice/crio-33494c3e5ffcd32fb5cbae889e5368c3908e63325ddd15a2984bf7e3b05e1486 WatchSource:0}: Error finding container 33494c3e5ffcd32fb5cbae889e5368c3908e63325ddd15a2984bf7e3b05e1486: Status 404 returned error can't find the container with id 33494c3e5ffcd32fb5cbae889e5368c3908e63325ddd15a2984bf7e3b05e1486 Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.525963 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3d7e-account-create-update-6v56m"] Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.766285 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-91ca-account-create-update-lzb9w"] Jan 28 15:37:46 crc kubenswrapper[4656]: W0128 15:37:46.769865 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69e7d30a_cf9c_4aa5_880f_e214b8694082.slice/crio-55ef67f1f36c3603700891ada16e5143b56f1e614e30f050cf287dfa3dc84480 WatchSource:0}: Error finding container 55ef67f1f36c3603700891ada16e5143b56f1e614e30f050cf287dfa3dc84480: Status 404 returned error can't find the container with id 55ef67f1f36c3603700891ada16e5143b56f1e614e30f050cf287dfa3dc84480 Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.948377 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2q82c"] Jan 28 15:37:46 crc kubenswrapper[4656]: W0128 15:37:46.967070 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32493d3d_ca02_451a_b1b0_51d4f82d54f3.slice/crio-0d703ae6d277ea3fdee2b2fe8028c4b0a2846365383a931aaaec18b0d1b6b7a9 WatchSource:0}: Error finding container 0d703ae6d277ea3fdee2b2fe8028c4b0a2846365383a931aaaec18b0d1b6b7a9: Status 404 returned error can't find the container with id 0d703ae6d277ea3fdee2b2fe8028c4b0a2846365383a931aaaec18b0d1b6b7a9 Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.194579 4656 generic.go:334] "Generic (PLEG): container finished" podID="69e7d30a-cf9c-4aa5-880f-e214b8694082" containerID="2b87d65405f263ac45d36349abb81b3c7c7ec205bf002789539910deb242d968" exitCode=0 Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.194668 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-91ca-account-create-update-lzb9w" event={"ID":"69e7d30a-cf9c-4aa5-880f-e214b8694082","Type":"ContainerDied","Data":"2b87d65405f263ac45d36349abb81b3c7c7ec205bf002789539910deb242d968"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.194703 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-91ca-account-create-update-lzb9w" event={"ID":"69e7d30a-cf9c-4aa5-880f-e214b8694082","Type":"ContainerStarted","Data":"55ef67f1f36c3603700891ada16e5143b56f1e614e30f050cf287dfa3dc84480"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.197297 4656 generic.go:334] "Generic (PLEG): container finished" podID="e4037dd9-6fe0-4f3a-9fca-4e716126a317" containerID="6fe56555d0ff628c998638556e5ceea7a70562af71008d6c468acec1f6303046" exitCode=0 Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.197352 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b64sc" event={"ID":"e4037dd9-6fe0-4f3a-9fca-4e716126a317","Type":"ContainerDied","Data":"6fe56555d0ff628c998638556e5ceea7a70562af71008d6c468acec1f6303046"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.197373 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b64sc" event={"ID":"e4037dd9-6fe0-4f3a-9fca-4e716126a317","Type":"ContainerStarted","Data":"33494c3e5ffcd32fb5cbae889e5368c3908e63325ddd15a2984bf7e3b05e1486"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.201688 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2q82c" event={"ID":"32493d3d-ca02-451a-b1b0-51d4f82d54f3","Type":"ContainerStarted","Data":"3dde6ae74f513f5fb2d842f4ed02820c2b2f634f4c0cbc22132cce071c91ef58"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.201734 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2q82c" event={"ID":"32493d3d-ca02-451a-b1b0-51d4f82d54f3","Type":"ContainerStarted","Data":"0d703ae6d277ea3fdee2b2fe8028c4b0a2846365383a931aaaec18b0d1b6b7a9"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.209577 4656 generic.go:334] "Generic (PLEG): container finished" podID="dfcf1378-d339-426c-bd01-36cd47172c37" containerID="54efb2e0c946cd8bb2a3f18919e54300b18699ccd32fa90c9ddedf9982d9a734" exitCode=0 Jan 28 15:37:47 crc kubenswrapper[4656]: 
I0128 15:37:47.209643 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gnzml" event={"ID":"dfcf1378-d339-426c-bd01-36cd47172c37","Type":"ContainerDied","Data":"54efb2e0c946cd8bb2a3f18919e54300b18699ccd32fa90c9ddedf9982d9a734"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.213669 4656 generic.go:334] "Generic (PLEG): container finished" podID="c4a6af64-b874-4449-ae51-8902df8e9bdf" containerID="2365be7a0a671764daf45f822757fffcf6c88cb4fb34815a2047bee431512215" exitCode=0 Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.213755 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3d7e-account-create-update-6v56m" event={"ID":"c4a6af64-b874-4449-ae51-8902df8e9bdf","Type":"ContainerDied","Data":"2365be7a0a671764daf45f822757fffcf6c88cb4fb34815a2047bee431512215"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.213805 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3d7e-account-create-update-6v56m" event={"ID":"c4a6af64-b874-4449-ae51-8902df8e9bdf","Type":"ContainerStarted","Data":"6e890274229c2c612d49becf0adfbd07b308538b46af00a2122eafff404bd9ac"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.217734 4656 generic.go:334] "Generic (PLEG): container finished" podID="099dbe89-7289-453c-84a2-f0de86b792cf" containerID="ae2e87be8e9439b8c799f033078d2b5429004fda019d053d0cb63bfa319ab598" exitCode=0 Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.217778 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-93fe-account-create-update-r2snq" event={"ID":"099dbe89-7289-453c-84a2-f0de86b792cf","Type":"ContainerDied","Data":"ae2e87be8e9439b8c799f033078d2b5429004fda019d053d0cb63bfa319ab598"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.217836 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-93fe-account-create-update-r2snq" 
event={"ID":"099dbe89-7289-453c-84a2-f0de86b792cf","Type":"ContainerStarted","Data":"92f699f1bd35f758042c6a6090381504729114735d6bc5a5bc42a505fcc60b89"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.290797 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-2q82c" podStartSLOduration=2.290770631 podStartE2EDuration="2.290770631s" podCreationTimestamp="2026-01-28 15:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:37:47.282249278 +0000 UTC m=+1157.790420082" watchObservedRunningTime="2026-01-28 15:37:47.290770631 +0000 UTC m=+1157.798941445" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.610845 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.719312 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-combined-ca-bundle\") pod \"5db57b48-1e29-4c73-b488-d6998232fce1\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.719370 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hgrh\" (UniqueName: \"kubernetes.io/projected/5db57b48-1e29-4c73-b488-d6998232fce1-kube-api-access-6hgrh\") pod \"5db57b48-1e29-4c73-b488-d6998232fce1\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.719436 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5db57b48-1e29-4c73-b488-d6998232fce1-etc-swift\") pod \"5db57b48-1e29-4c73-b488-d6998232fce1\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " Jan 28 15:37:47 crc 
kubenswrapper[4656]: I0128 15:37:47.719463 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-swiftconf\") pod \"5db57b48-1e29-4c73-b488-d6998232fce1\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.719491 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-scripts\") pod \"5db57b48-1e29-4c73-b488-d6998232fce1\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.719541 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-ring-data-devices\") pod \"5db57b48-1e29-4c73-b488-d6998232fce1\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.719684 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-dispersionconf\") pod \"5db57b48-1e29-4c73-b488-d6998232fce1\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.722902 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5db57b48-1e29-4c73-b488-d6998232fce1" (UID: "5db57b48-1e29-4c73-b488-d6998232fce1"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.723993 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db57b48-1e29-4c73-b488-d6998232fce1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5db57b48-1e29-4c73-b488-d6998232fce1" (UID: "5db57b48-1e29-4c73-b488-d6998232fce1"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.726883 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db57b48-1e29-4c73-b488-d6998232fce1-kube-api-access-6hgrh" (OuterVolumeSpecName: "kube-api-access-6hgrh") pod "5db57b48-1e29-4c73-b488-d6998232fce1" (UID: "5db57b48-1e29-4c73-b488-d6998232fce1"). InnerVolumeSpecName "kube-api-access-6hgrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.729257 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5db57b48-1e29-4c73-b488-d6998232fce1" (UID: "5db57b48-1e29-4c73-b488-d6998232fce1"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.746197 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-scripts" (OuterVolumeSpecName: "scripts") pod "5db57b48-1e29-4c73-b488-d6998232fce1" (UID: "5db57b48-1e29-4c73-b488-d6998232fce1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.748050 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5db57b48-1e29-4c73-b488-d6998232fce1" (UID: "5db57b48-1e29-4c73-b488-d6998232fce1"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.752355 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5db57b48-1e29-4c73-b488-d6998232fce1" (UID: "5db57b48-1e29-4c73-b488-d6998232fce1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.821645 4656 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.821681 4656 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.821691 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hgrh\" (UniqueName: \"kubernetes.io/projected/5db57b48-1e29-4c73-b488-d6998232fce1-kube-api-access-6hgrh\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.821700 4656 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-swiftconf\") on node \"crc\" DevicePath \"\"" 
Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.821708 4656 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5db57b48-1e29-4c73-b488-d6998232fce1-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.821719 4656 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.821727 4656 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.227361 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mfbzm" event={"ID":"5db57b48-1e29-4c73-b488-d6998232fce1","Type":"ContainerDied","Data":"bc88037467be5dfdde8170edc5a85201ae8480da1a1918ca47b82aaa571d9b74"} Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.227403 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc88037467be5dfdde8170edc5a85201ae8480da1a1918ca47b82aaa571d9b74" Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.227483 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.229074 4656 generic.go:334] "Generic (PLEG): container finished" podID="32493d3d-ca02-451a-b1b0-51d4f82d54f3" containerID="3dde6ae74f513f5fb2d842f4ed02820c2b2f634f4c0cbc22132cce071c91ef58" exitCode=0 Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.229546 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2q82c" event={"ID":"32493d3d-ca02-451a-b1b0-51d4f82d54f3","Type":"ContainerDied","Data":"3dde6ae74f513f5fb2d842f4ed02820c2b2f634f4c0cbc22132cce071c91ef58"} Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.574091 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-44tss"] Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.581026 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-44tss"] Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.752813 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-91ca-account-create-update-lzb9w"
Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.844782 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e7d30a-cf9c-4aa5-880f-e214b8694082-operator-scripts\") pod \"69e7d30a-cf9c-4aa5-880f-e214b8694082\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") "
Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.844958 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkg2c\" (UniqueName: \"kubernetes.io/projected/69e7d30a-cf9c-4aa5-880f-e214b8694082-kube-api-access-dkg2c\") pod \"69e7d30a-cf9c-4aa5-880f-e214b8694082\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") "
Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.845930 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e7d30a-cf9c-4aa5-880f-e214b8694082-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69e7d30a-cf9c-4aa5-880f-e214b8694082" (UID: "69e7d30a-cf9c-4aa5-880f-e214b8694082"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.857537 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e7d30a-cf9c-4aa5-880f-e214b8694082-kube-api-access-dkg2c" (OuterVolumeSpecName: "kube-api-access-dkg2c") pod "69e7d30a-cf9c-4aa5-880f-e214b8694082" (UID: "69e7d30a-cf9c-4aa5-880f-e214b8694082"). InnerVolumeSpecName "kube-api-access-dkg2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.932436 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b64sc"
Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.945214 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gnzml"
Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.948907 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkg2c\" (UniqueName: \"kubernetes.io/projected/69e7d30a-cf9c-4aa5-880f-e214b8694082-kube-api-access-dkg2c\") on node \"crc\" DevicePath \"\""
Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.948962 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e7d30a-cf9c-4aa5-880f-e214b8694082-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.967856 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3d7e-account-create-update-6v56m"
Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.978112 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-93fe-account-create-update-r2snq"
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049575 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099dbe89-7289-453c-84a2-f0de86b792cf-operator-scripts\") pod \"099dbe89-7289-453c-84a2-f0de86b792cf\" (UID: \"099dbe89-7289-453c-84a2-f0de86b792cf\") "
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049654 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4037dd9-6fe0-4f3a-9fca-4e716126a317-operator-scripts\") pod \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") "
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049687 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-787d7\" (UniqueName: \"kubernetes.io/projected/e4037dd9-6fe0-4f3a-9fca-4e716126a317-kube-api-access-787d7\") pod \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") "
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049707 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch2rp\" (UniqueName: \"kubernetes.io/projected/099dbe89-7289-453c-84a2-f0de86b792cf-kube-api-access-ch2rp\") pod \"099dbe89-7289-453c-84a2-f0de86b792cf\" (UID: \"099dbe89-7289-453c-84a2-f0de86b792cf\") "
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049756 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk9sj\" (UniqueName: \"kubernetes.io/projected/c4a6af64-b874-4449-ae51-8902df8e9bdf-kube-api-access-tk9sj\") pod \"c4a6af64-b874-4449-ae51-8902df8e9bdf\" (UID: \"c4a6af64-b874-4449-ae51-8902df8e9bdf\") "
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049811 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a6af64-b874-4449-ae51-8902df8e9bdf-operator-scripts\") pod \"c4a6af64-b874-4449-ae51-8902df8e9bdf\" (UID: \"c4a6af64-b874-4449-ae51-8902df8e9bdf\") "
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049853 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfcf1378-d339-426c-bd01-36cd47172c37-operator-scripts\") pod \"dfcf1378-d339-426c-bd01-36cd47172c37\" (UID: \"dfcf1378-d339-426c-bd01-36cd47172c37\") "
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049968 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7fx4\" (UniqueName: \"kubernetes.io/projected/dfcf1378-d339-426c-bd01-36cd47172c37-kube-api-access-j7fx4\") pod \"dfcf1378-d339-426c-bd01-36cd47172c37\" (UID: \"dfcf1378-d339-426c-bd01-36cd47172c37\") "
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.051595 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099dbe89-7289-453c-84a2-f0de86b792cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "099dbe89-7289-453c-84a2-f0de86b792cf" (UID: "099dbe89-7289-453c-84a2-f0de86b792cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.051643 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a6af64-b874-4449-ae51-8902df8e9bdf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4a6af64-b874-4449-ae51-8902df8e9bdf" (UID: "c4a6af64-b874-4449-ae51-8902df8e9bdf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.052026 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfcf1378-d339-426c-bd01-36cd47172c37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dfcf1378-d339-426c-bd01-36cd47172c37" (UID: "dfcf1378-d339-426c-bd01-36cd47172c37"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.052431 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4037dd9-6fe0-4f3a-9fca-4e716126a317-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4037dd9-6fe0-4f3a-9fca-4e716126a317" (UID: "e4037dd9-6fe0-4f3a-9fca-4e716126a317"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.055406 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/099dbe89-7289-453c-84a2-f0de86b792cf-kube-api-access-ch2rp" (OuterVolumeSpecName: "kube-api-access-ch2rp") pod "099dbe89-7289-453c-84a2-f0de86b792cf" (UID: "099dbe89-7289-453c-84a2-f0de86b792cf"). InnerVolumeSpecName "kube-api-access-ch2rp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.055784 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4037dd9-6fe0-4f3a-9fca-4e716126a317-kube-api-access-787d7" (OuterVolumeSpecName: "kube-api-access-787d7") pod "e4037dd9-6fe0-4f3a-9fca-4e716126a317" (UID: "e4037dd9-6fe0-4f3a-9fca-4e716126a317"). InnerVolumeSpecName "kube-api-access-787d7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.055965 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfcf1378-d339-426c-bd01-36cd47172c37-kube-api-access-j7fx4" (OuterVolumeSpecName: "kube-api-access-j7fx4") pod "dfcf1378-d339-426c-bd01-36cd47172c37" (UID: "dfcf1378-d339-426c-bd01-36cd47172c37"). InnerVolumeSpecName "kube-api-access-j7fx4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.058175 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a6af64-b874-4449-ae51-8902df8e9bdf-kube-api-access-tk9sj" (OuterVolumeSpecName: "kube-api-access-tk9sj") pod "c4a6af64-b874-4449-ae51-8902df8e9bdf" (UID: "c4a6af64-b874-4449-ae51-8902df8e9bdf"). InnerVolumeSpecName "kube-api-access-tk9sj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152471 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7fx4\" (UniqueName: \"kubernetes.io/projected/dfcf1378-d339-426c-bd01-36cd47172c37-kube-api-access-j7fx4\") on node \"crc\" DevicePath \"\""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152519 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099dbe89-7289-453c-84a2-f0de86b792cf-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152528 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4037dd9-6fe0-4f3a-9fca-4e716126a317-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152537 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-787d7\" (UniqueName: \"kubernetes.io/projected/e4037dd9-6fe0-4f3a-9fca-4e716126a317-kube-api-access-787d7\") on node \"crc\" DevicePath \"\""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152546 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch2rp\" (UniqueName: \"kubernetes.io/projected/099dbe89-7289-453c-84a2-f0de86b792cf-kube-api-access-ch2rp\") on node \"crc\" DevicePath \"\""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152554 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk9sj\" (UniqueName: \"kubernetes.io/projected/c4a6af64-b874-4449-ae51-8902df8e9bdf-kube-api-access-tk9sj\") on node \"crc\" DevicePath \"\""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152563 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a6af64-b874-4449-ae51-8902df8e9bdf-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152592 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfcf1378-d339-426c-bd01-36cd47172c37-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.183720 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e05cea6-abbf-4ab7-b46f-e07960967728" path="/var/lib/kubelet/pods/2e05cea6-abbf-4ab7-b46f-e07960967728/volumes"
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.238979 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-93fe-account-create-update-r2snq" event={"ID":"099dbe89-7289-453c-84a2-f0de86b792cf","Type":"ContainerDied","Data":"92f699f1bd35f758042c6a6090381504729114735d6bc5a5bc42a505fcc60b89"}
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.239027 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f699f1bd35f758042c6a6090381504729114735d6bc5a5bc42a505fcc60b89"
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.239032 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-93fe-account-create-update-r2snq"
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.240791 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-91ca-account-create-update-lzb9w"
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.241335 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-91ca-account-create-update-lzb9w" event={"ID":"69e7d30a-cf9c-4aa5-880f-e214b8694082","Type":"ContainerDied","Data":"55ef67f1f36c3603700891ada16e5143b56f1e614e30f050cf287dfa3dc84480"}
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.241365 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55ef67f1f36c3603700891ada16e5143b56f1e614e30f050cf287dfa3dc84480"
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.259669 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b64sc" event={"ID":"e4037dd9-6fe0-4f3a-9fca-4e716126a317","Type":"ContainerDied","Data":"33494c3e5ffcd32fb5cbae889e5368c3908e63325ddd15a2984bf7e3b05e1486"}
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.259727 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33494c3e5ffcd32fb5cbae889e5368c3908e63325ddd15a2984bf7e3b05e1486"
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.259778 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b64sc"
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.271543 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gnzml" event={"ID":"dfcf1378-d339-426c-bd01-36cd47172c37","Type":"ContainerDied","Data":"1c3bb1d889fd4a796f70f3d1e9d53ba7f824ad2cfc48fac384daf6b22af00e87"}
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.271678 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c3bb1d889fd4a796f70f3d1e9d53ba7f824ad2cfc48fac384daf6b22af00e87"
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.271768 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gnzml"
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.274946 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3d7e-account-create-update-6v56m"
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.275663 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3d7e-account-create-update-6v56m" event={"ID":"c4a6af64-b874-4449-ae51-8902df8e9bdf","Type":"ContainerDied","Data":"6e890274229c2c612d49becf0adfbd07b308538b46af00a2122eafff404bd9ac"}
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.275692 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e890274229c2c612d49becf0adfbd07b308538b46af00a2122eafff404bd9ac"
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.700633 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2q82c"
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.763754 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmf6z\" (UniqueName: \"kubernetes.io/projected/32493d3d-ca02-451a-b1b0-51d4f82d54f3-kube-api-access-jmf6z\") pod \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") "
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.763875 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32493d3d-ca02-451a-b1b0-51d4f82d54f3-operator-scripts\") pod \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") "
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.764714 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32493d3d-ca02-451a-b1b0-51d4f82d54f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32493d3d-ca02-451a-b1b0-51d4f82d54f3" (UID: "32493d3d-ca02-451a-b1b0-51d4f82d54f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.766953 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32493d3d-ca02-451a-b1b0-51d4f82d54f3-kube-api-access-jmf6z" (OuterVolumeSpecName: "kube-api-access-jmf6z") pod "32493d3d-ca02-451a-b1b0-51d4f82d54f3" (UID: "32493d3d-ca02-451a-b1b0-51d4f82d54f3"). InnerVolumeSpecName "kube-api-access-jmf6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.866015 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmf6z\" (UniqueName: \"kubernetes.io/projected/32493d3d-ca02-451a-b1b0-51d4f82d54f3-kube-api-access-jmf6z\") on node \"crc\" DevicePath \"\""
Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.866316 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32493d3d-ca02-451a-b1b0-51d4f82d54f3-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:37:50 crc kubenswrapper[4656]: I0128 15:37:50.282463 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2q82c" event={"ID":"32493d3d-ca02-451a-b1b0-51d4f82d54f3","Type":"ContainerDied","Data":"0d703ae6d277ea3fdee2b2fe8028c4b0a2846365383a931aaaec18b0d1b6b7a9"}
Jan 28 15:37:50 crc kubenswrapper[4656]: I0128 15:37:50.282499 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2q82c"
Jan 28 15:37:50 crc kubenswrapper[4656]: I0128 15:37:50.282528 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d703ae6d277ea3fdee2b2fe8028c4b0a2846365383a931aaaec18b0d1b6b7a9"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.086849 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-842wg"]
Jan 28 15:37:51 crc kubenswrapper[4656]: E0128 15:37:51.087216 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db57b48-1e29-4c73-b488-d6998232fce1" containerName="swift-ring-rebalance"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087230 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db57b48-1e29-4c73-b488-d6998232fce1" containerName="swift-ring-rebalance"
Jan 28 15:37:51 crc kubenswrapper[4656]: E0128 15:37:51.087246 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32493d3d-ca02-451a-b1b0-51d4f82d54f3" containerName="mariadb-database-create"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087252 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="32493d3d-ca02-451a-b1b0-51d4f82d54f3" containerName="mariadb-database-create"
Jan 28 15:37:51 crc kubenswrapper[4656]: E0128 15:37:51.087267 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a6af64-b874-4449-ae51-8902df8e9bdf" containerName="mariadb-account-create-update"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087273 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a6af64-b874-4449-ae51-8902df8e9bdf" containerName="mariadb-account-create-update"
Jan 28 15:37:51 crc kubenswrapper[4656]: E0128 15:37:51.087288 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4037dd9-6fe0-4f3a-9fca-4e716126a317" containerName="mariadb-database-create"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087294 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4037dd9-6fe0-4f3a-9fca-4e716126a317" containerName="mariadb-database-create"
Jan 28 15:37:51 crc kubenswrapper[4656]: E0128 15:37:51.087300 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099dbe89-7289-453c-84a2-f0de86b792cf" containerName="mariadb-account-create-update"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087306 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="099dbe89-7289-453c-84a2-f0de86b792cf" containerName="mariadb-account-create-update"
Jan 28 15:37:51 crc kubenswrapper[4656]: E0128 15:37:51.087321 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfcf1378-d339-426c-bd01-36cd47172c37" containerName="mariadb-database-create"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087327 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfcf1378-d339-426c-bd01-36cd47172c37" containerName="mariadb-database-create"
Jan 28 15:37:51 crc kubenswrapper[4656]: E0128 15:37:51.087337 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e7d30a-cf9c-4aa5-880f-e214b8694082" containerName="mariadb-account-create-update"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087343 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e7d30a-cf9c-4aa5-880f-e214b8694082" containerName="mariadb-account-create-update"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087495 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4037dd9-6fe0-4f3a-9fca-4e716126a317" containerName="mariadb-database-create"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087506 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e7d30a-cf9c-4aa5-880f-e214b8694082" containerName="mariadb-account-create-update"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087514 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="32493d3d-ca02-451a-b1b0-51d4f82d54f3" containerName="mariadb-database-create"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087523 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db57b48-1e29-4c73-b488-d6998232fce1" containerName="swift-ring-rebalance"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087534 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a6af64-b874-4449-ae51-8902df8e9bdf" containerName="mariadb-account-create-update"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087541 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfcf1378-d339-426c-bd01-36cd47172c37" containerName="mariadb-database-create"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087558 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="099dbe89-7289-453c-84a2-f0de86b792cf" containerName="mariadb-account-create-update"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.088093 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-842wg"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.091042 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.091581 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vkdm4"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.103951 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-842wg"]
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.105392 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.192072 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-combined-ca-bundle\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.192187 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-db-sync-config-data\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.192222 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-config-data\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.192347 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6d9d\" (UniqueName: \"kubernetes.io/projected/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-kube-api-access-x6d9d\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.293838 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6d9d\" (UniqueName: \"kubernetes.io/projected/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-kube-api-access-x6d9d\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.293943 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-combined-ca-bundle\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.294005 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-db-sync-config-data\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.294040 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-config-data\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.302931 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-combined-ca-bundle\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.304194 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-db-sync-config-data\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.304376 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-config-data\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.317931 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6d9d\" (UniqueName: \"kubernetes.io/projected/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-kube-api-access-x6d9d\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.403158 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-842wg"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.771741 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7pppv" podUID="4815f130-4106-456b-9bcb-b34536d9ddc9" containerName="ovn-controller" probeResult="failure" output=<
Jan 28 15:37:51 crc kubenswrapper[4656]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 28 15:37:51 crc kubenswrapper[4656]: >
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.788425 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.949897 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-842wg"]
Jan 28 15:37:52 crc kubenswrapper[4656]: I0128 15:37:52.297492 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-842wg" event={"ID":"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c","Type":"ContainerStarted","Data":"504211dfa97dc92df86e8b70c17397351feacd00a97b729f5ccfa3e6d0b19223"}
Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.309927 4656 generic.go:334] "Generic (PLEG): container finished" podID="07f26e32-4b43-4591-9ed2-6426a96e596e" containerID="74ff180262c50c4a408406c295f8f1bca87a9e6fc375807df80958cce55bb379" exitCode=0
Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.311366 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"07f26e32-4b43-4591-9ed2-6426a96e596e","Type":"ContainerDied","Data":"74ff180262c50c4a408406c295f8f1bca87a9e6fc375807df80958cce55bb379"}
Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.320601 4656 generic.go:334] "Generic (PLEG): container finished" podID="2239f1cd-f384-40df-9f71-a46caf290038" containerID="9d149d3029d11945b504fb085462ea962ef4c3fcb25963157c8baca85a61ef3e" exitCode=0
Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.320869 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2239f1cd-f384-40df-9f71-a46caf290038","Type":"ContainerDied","Data":"9d149d3029d11945b504fb085462ea962ef4c3fcb25963157c8baca85a61ef3e"}
Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.631102 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-slw4v"]
Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.634425 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-slw4v"
Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.653908 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.700547 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-slw4v"]
Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.750710 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4bf9\" (UniqueName: \"kubernetes.io/projected/1df812f9-220d-45f7-aa2a-c26196ef62e5-kube-api-access-c4bf9\") pod \"root-account-create-update-slw4v\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") " pod="openstack/root-account-create-update-slw4v"
Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.750894 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df812f9-220d-45f7-aa2a-c26196ef62e5-operator-scripts\") pod \"root-account-create-update-slw4v\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") " pod="openstack/root-account-create-update-slw4v"
Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.852494 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4bf9\" (UniqueName: \"kubernetes.io/projected/1df812f9-220d-45f7-aa2a-c26196ef62e5-kube-api-access-c4bf9\") pod \"root-account-create-update-slw4v\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") " pod="openstack/root-account-create-update-slw4v"
Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.852600 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df812f9-220d-45f7-aa2a-c26196ef62e5-operator-scripts\") pod \"root-account-create-update-slw4v\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") " pod="openstack/root-account-create-update-slw4v"
Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.853581 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df812f9-220d-45f7-aa2a-c26196ef62e5-operator-scripts\") pod \"root-account-create-update-slw4v\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") " pod="openstack/root-account-create-update-slw4v"
Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.876949 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4bf9\" (UniqueName: \"kubernetes.io/projected/1df812f9-220d-45f7-aa2a-c26196ef62e5-kube-api-access-c4bf9\") pod \"root-account-create-update-slw4v\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") " pod="openstack/root-account-create-update-slw4v"
Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.989716 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-slw4v"
Jan 28 15:37:54 crc kubenswrapper[4656]: I0128 15:37:54.331043 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"07f26e32-4b43-4591-9ed2-6426a96e596e","Type":"ContainerStarted","Data":"eedc09f76fba9795c3755ad8270b6016ad3c1ad57fdd2985408c67e2cb4c8c21"}
Jan 28 15:37:54 crc kubenswrapper[4656]: I0128 15:37:54.331643 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 15:37:54 crc kubenswrapper[4656]: I0128 15:37:54.335559 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2239f1cd-f384-40df-9f71-a46caf290038","Type":"ContainerStarted","Data":"ef19d1fbea8f6cbc01d46ca96ef38ad5556579895d3feb64e4400c248ada14cf"}
Jan 28 15:37:54 crc kubenswrapper[4656]: I0128 15:37:54.336202 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 28 15:37:54 crc kubenswrapper[4656]: I0128 15:37:54.377022 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.677132747 podStartE2EDuration="1m24.376995823s" podCreationTimestamp="2026-01-28 15:36:30 +0000 UTC" firstStartedPulling="2026-01-28 15:36:32.85227531 +0000 UTC m=+1083.360446114" lastFinishedPulling="2026-01-28 15:37:19.552138386 +0000 UTC m=+1130.060309190" observedRunningTime="2026-01-28 15:37:54.359254297 +0000 UTC m=+1164.867425121" watchObservedRunningTime="2026-01-28 15:37:54.376995823 +0000 UTC m=+1164.885166627"
Jan 28 15:37:54 crc kubenswrapper[4656]: I0128 15:37:54.400109 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.930224881 podStartE2EDuration="1m24.400083871s" podCreationTimestamp="2026-01-28 15:36:30 +0000 UTC" firstStartedPulling="2026-01-28 15:36:33.084599262 +0000 UTC m=+1083.592770066" lastFinishedPulling="2026-01-28 15:37:19.554458252 +0000 UTC m=+1130.062629056" observedRunningTime="2026-01-28 15:37:54.389316724 +0000 UTC m=+1164.897487558" watchObservedRunningTime="2026-01-28 15:37:54.400083871 +0000 UTC m=+1164.908254675"
Jan 28 15:37:54 crc kubenswrapper[4656]: I0128 15:37:54.493997 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-slw4v"]
Jan 28 15:37:54 crc kubenswrapper[4656]: W0128 15:37:54.506674 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1df812f9_220d_45f7_aa2a_c26196ef62e5.slice/crio-6edbb4faf0e06a43e8ee09539fd5a45d26d278d4998a444175e4443ac00be48d WatchSource:0}: Error finding container 6edbb4faf0e06a43e8ee09539fd5a45d26d278d4998a444175e4443ac00be48d: Status 404 returned error can't find the container with id 6edbb4faf0e06a43e8ee09539fd5a45d26d278d4998a444175e4443ac00be48d
Jan 28 15:37:55 crc kubenswrapper[4656]: I0128 15:37:55.345209 4656 generic.go:334] "Generic (PLEG): container finished" podID="1df812f9-220d-45f7-aa2a-c26196ef62e5" containerID="5f4d77dc94bab7589c74c06da6f776ffa81bcf97ec1f8a2199e44a457c390fb6" exitCode=0
Jan 28 15:37:55 crc kubenswrapper[4656]: I0128 15:37:55.345273 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-slw4v" event={"ID":"1df812f9-220d-45f7-aa2a-c26196ef62e5","Type":"ContainerDied","Data":"5f4d77dc94bab7589c74c06da6f776ffa81bcf97ec1f8a2199e44a457c390fb6"}
Jan 28 15:37:55 crc kubenswrapper[4656]: I0128 15:37:55.345703 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-slw4v" event={"ID":"1df812f9-220d-45f7-aa2a-c26196ef62e5","Type":"ContainerStarted","Data":"6edbb4faf0e06a43e8ee09539fd5a45d26d278d4998a444175e4443ac00be48d"}
Jan 28 15:37:56 crc kubenswrapper[4656]: I0128 15:37:56.875731 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-slw4v"
Jan 28 15:37:56 crc kubenswrapper[4656]: I0128 15:37:56.885335 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7pppv" podUID="4815f130-4106-456b-9bcb-b34536d9ddc9" containerName="ovn-controller" probeResult="failure" output=<
Jan 28 15:37:56 crc kubenswrapper[4656]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 28 15:37:56 crc kubenswrapper[4656]: >
Jan 28 15:37:56 crc kubenswrapper[4656]: I0128 15:37:56.889088 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:37:56 crc kubenswrapper[4656]: I0128 15:37:56.914821 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4bf9\" (UniqueName: \"kubernetes.io/projected/1df812f9-220d-45f7-aa2a-c26196ef62e5-kube-api-access-c4bf9\") pod \"1df812f9-220d-45f7-aa2a-c26196ef62e5\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") "
Jan 28 15:37:56 crc kubenswrapper[4656]: I0128 15:37:56.914898 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df812f9-220d-45f7-aa2a-c26196ef62e5-operator-scripts\") pod \"1df812f9-220d-45f7-aa2a-c26196ef62e5\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") "
Jan 28 15:37:56 crc kubenswrapper[4656]: I0128 15:37:56.918519 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1df812f9-220d-45f7-aa2a-c26196ef62e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1df812f9-220d-45f7-aa2a-c26196ef62e5" (UID: "1df812f9-220d-45f7-aa2a-c26196ef62e5"). InnerVolumeSpecName "operator-scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:56 crc kubenswrapper[4656]: I0128 15:37:56.923594 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df812f9-220d-45f7-aa2a-c26196ef62e5-kube-api-access-c4bf9" (OuterVolumeSpecName: "kube-api-access-c4bf9") pod "1df812f9-220d-45f7-aa2a-c26196ef62e5" (UID: "1df812f9-220d-45f7-aa2a-c26196ef62e5"). InnerVolumeSpecName "kube-api-access-c4bf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.017299 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4bf9\" (UniqueName: \"kubernetes.io/projected/1df812f9-220d-45f7-aa2a-c26196ef62e5-kube-api-access-c4bf9\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.017364 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df812f9-220d-45f7-aa2a-c26196ef62e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.135477 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7pppv-config-tdl92"] Jan 28 15:37:57 crc kubenswrapper[4656]: E0128 15:37:57.135946 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df812f9-220d-45f7-aa2a-c26196ef62e5" containerName="mariadb-account-create-update" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.135968 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df812f9-220d-45f7-aa2a-c26196ef62e5" containerName="mariadb-account-create-update" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.136195 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="1df812f9-220d-45f7-aa2a-c26196ef62e5" containerName="mariadb-account-create-update" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.136868 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.148770 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.165626 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7pppv-config-tdl92"] Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.233502 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run-ovn\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.234613 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ttw7\" (UniqueName: \"kubernetes.io/projected/e622e2da-0115-43f6-bfe9-e3611becf555-kube-api-access-8ttw7\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.234886 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.234912 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-log-ovn\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: 
\"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.235118 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-scripts\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.235346 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-additional-scripts\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.336661 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-scripts\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.336736 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-additional-scripts\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.336768 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run-ovn\") pod 
\"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.336814 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ttw7\" (UniqueName: \"kubernetes.io/projected/e622e2da-0115-43f6-bfe9-e3611becf555-kube-api-access-8ttw7\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.336846 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.336867 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-log-ovn\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.337382 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-log-ovn\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.339676 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-scripts\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: 
\"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.340126 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-additional-scripts\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.340376 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.340583 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run-ovn\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.367094 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-slw4v" event={"ID":"1df812f9-220d-45f7-aa2a-c26196ef62e5","Type":"ContainerDied","Data":"6edbb4faf0e06a43e8ee09539fd5a45d26d278d4998a444175e4443ac00be48d"} Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.367136 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6edbb4faf0e06a43e8ee09539fd5a45d26d278d4998a444175e4443ac00be48d" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.367231 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-slw4v" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.377388 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ttw7\" (UniqueName: \"kubernetes.io/projected/e622e2da-0115-43f6-bfe9-e3611becf555-kube-api-access-8ttw7\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.454772 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.841730 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7pppv-config-tdl92"] Jan 28 15:37:58 crc kubenswrapper[4656]: I0128 15:37:58.375346 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7pppv-config-tdl92" event={"ID":"e622e2da-0115-43f6-bfe9-e3611becf555","Type":"ContainerStarted","Data":"4aa31b8f5619732fb1055bbe4acfbf9c0754a2dcbf2dcdb1cd6e163c262a3dc3"} Jan 28 15:37:59 crc kubenswrapper[4656]: I0128 15:37:59.383712 4656 generic.go:334] "Generic (PLEG): container finished" podID="e622e2da-0115-43f6-bfe9-e3611becf555" containerID="bdbd8b9752b666c133af6189feff3118817c7063c0dd053bb54b7a6f4c3b19d7" exitCode=0 Jan 28 15:37:59 crc kubenswrapper[4656]: I0128 15:37:59.383815 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7pppv-config-tdl92" event={"ID":"e622e2da-0115-43f6-bfe9-e3611becf555","Type":"ContainerDied","Data":"bdbd8b9752b666c133af6189feff3118817c7063c0dd053bb54b7a6f4c3b19d7"} Jan 28 15:38:01 crc kubenswrapper[4656]: I0128 15:38:01.315333 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod 
\"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:38:01 crc kubenswrapper[4656]: I0128 15:38:01.329508 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:38:01 crc kubenswrapper[4656]: I0128 15:38:01.477081 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 28 15:38:01 crc kubenswrapper[4656]: I0128 15:38:01.777562 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-7pppv" Jan 28 15:38:08 crc kubenswrapper[4656]: E0128 15:38:08.653268 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 28 15:38:08 crc kubenswrapper[4656]: E0128 15:38:08.654056 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6d9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-842wg_openstack(72cfa9c1-01ab-4c7e-80fa-f99e63b2602c): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 28 15:38:08 crc kubenswrapper[4656]: E0128 15:38:08.655499 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-842wg" podUID="72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.721963 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.920731 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run\") pod \"e622e2da-0115-43f6-bfe9-e3611becf555\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921343 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-log-ovn\") pod \"e622e2da-0115-43f6-bfe9-e3611becf555\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921379 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run" (OuterVolumeSpecName: "var-run") pod "e622e2da-0115-43f6-bfe9-e3611becf555" (UID: "e622e2da-0115-43f6-bfe9-e3611becf555"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921398 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-additional-scripts\") pod \"e622e2da-0115-43f6-bfe9-e3611becf555\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921482 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ttw7\" (UniqueName: \"kubernetes.io/projected/e622e2da-0115-43f6-bfe9-e3611becf555-kube-api-access-8ttw7\") pod \"e622e2da-0115-43f6-bfe9-e3611becf555\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921550 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-scripts\") pod \"e622e2da-0115-43f6-bfe9-e3611becf555\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921582 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run-ovn\") pod \"e622e2da-0115-43f6-bfe9-e3611becf555\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921879 4656 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921925 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod 
"e622e2da-0115-43f6-bfe9-e3611becf555" (UID: "e622e2da-0115-43f6-bfe9-e3611becf555"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.922100 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e622e2da-0115-43f6-bfe9-e3611becf555" (UID: "e622e2da-0115-43f6-bfe9-e3611becf555"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.922198 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e622e2da-0115-43f6-bfe9-e3611becf555" (UID: "e622e2da-0115-43f6-bfe9-e3611becf555"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.923209 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-scripts" (OuterVolumeSpecName: "scripts") pod "e622e2da-0115-43f6-bfe9-e3611becf555" (UID: "e622e2da-0115-43f6-bfe9-e3611becf555"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.929667 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e622e2da-0115-43f6-bfe9-e3611becf555-kube-api-access-8ttw7" (OuterVolumeSpecName: "kube-api-access-8ttw7") pod "e622e2da-0115-43f6-bfe9-e3611becf555" (UID: "e622e2da-0115-43f6-bfe9-e3611becf555"). InnerVolumeSpecName "kube-api-access-8ttw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.024367 4656 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.024408 4656 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.024421 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ttw7\" (UniqueName: \"kubernetes.io/projected/e622e2da-0115-43f6-bfe9-e3611becf555-kube-api-access-8ttw7\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.024435 4656 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.024446 4656 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.420384 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.509904 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.510438 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7pppv-config-tdl92" event={"ID":"e622e2da-0115-43f6-bfe9-e3611becf555","Type":"ContainerDied","Data":"4aa31b8f5619732fb1055bbe4acfbf9c0754a2dcbf2dcdb1cd6e163c262a3dc3"} Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.510485 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aa31b8f5619732fb1055bbe4acfbf9c0754a2dcbf2dcdb1cd6e163c262a3dc3" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.512808 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"64268af379d7f3c0e056d35cf34f8787a2f4ad9991806eb3fc8abaa1994d99ba"} Jan 28 15:38:09 crc kubenswrapper[4656]: E0128 15:38:09.514954 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-842wg" podUID="72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.871391 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7pppv-config-tdl92"] Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.880215 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7pppv-config-tdl92"] Jan 28 15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.188496 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e622e2da-0115-43f6-bfe9-e3611becf555" path="/var/lib/kubelet/pods/e622e2da-0115-43f6-bfe9-e3611becf555/volumes" Jan 28 15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.263732 4656 patch_prober.go:28] interesting 
pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.263811 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.533757 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"192573c372aa8363c99db92dcea056662b0a52a0e76c9fcbb7561765dbe3a3ec"} Jan 28 15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.533814 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"6fdb704054c689a55fbae3248476332c7d200fc74fe7d027abe987373aa0b7ee"} Jan 28 15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.533837 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"0dc563523a3b05da09673142d0bcef433d0ed61d24e722cc9002f95d87bf2ca2"} Jan 28 15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.533846 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"84aa59c7985abd9f8596a471a0eedba27ee3d39d366f840ae4b611576b614d08"} Jan 28 15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.958351 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:38:12 crc kubenswrapper[4656]: I0128 15:38:12.398376 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 15:38:13 crc kubenswrapper[4656]: I0128 15:38:13.565510 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"7df3ab25926ee1fd18f4c68b8f14da2c9abaced628c0abf4130c4058e472fb19"} Jan 28 15:38:13 crc kubenswrapper[4656]: I0128 15:38:13.566767 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"f802c419dee768a52cb8dc9d16fa88ca018ac59b8bc0a8bee1cc28d556402451"} Jan 28 15:38:13 crc kubenswrapper[4656]: I0128 15:38:13.566847 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"9fe2e05ce17cdad3e4d859f62c568981c7fa711d2bdbf7ff63766b1160c5c8fd"} Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.558731 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4dvx6"] Jan 28 15:38:14 crc kubenswrapper[4656]: E0128 15:38:14.559237 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e622e2da-0115-43f6-bfe9-e3611becf555" containerName="ovn-config" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.559259 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e622e2da-0115-43f6-bfe9-e3611becf555" containerName="ovn-config" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.559504 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="e622e2da-0115-43f6-bfe9-e3611becf555" containerName="ovn-config" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.560136 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.617795 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4dvx6"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.630073 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"5a0e6ef9b93913865f6a47f446281f75ce1121e19515c1259df35c0242397cb6"} Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.672775 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-vdvd7"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.674053 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.698105 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vdvd7"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.724839 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkr88\" (UniqueName: \"kubernetes.io/projected/2455ddf8-bd67-4fe1-821e-0feda40d7da9-kube-api-access-bkr88\") pod \"barbican-db-create-vdvd7\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.724967 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57z68\" (UniqueName: \"kubernetes.io/projected/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-kube-api-access-57z68\") pod \"cinder-db-create-4dvx6\" (UID: \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.725041 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2455ddf8-bd67-4fe1-821e-0feda40d7da9-operator-scripts\") pod \"barbican-db-create-vdvd7\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.725088 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-operator-scripts\") pod \"cinder-db-create-4dvx6\" (UID: \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.783349 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2c7a-account-create-update-s5pbw"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.785098 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.788865 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.807800 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2c7a-account-create-update-s5pbw"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.826133 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57z68\" (UniqueName: \"kubernetes.io/projected/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-kube-api-access-57z68\") pod \"cinder-db-create-4dvx6\" (UID: \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.826226 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz4kn\" (UniqueName: 
\"kubernetes.io/projected/e4d3108c-ea49-46b7-896a-6303b5651abc-kube-api-access-dz4kn\") pod \"cinder-2c7a-account-create-update-s5pbw\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.826293 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2455ddf8-bd67-4fe1-821e-0feda40d7da9-operator-scripts\") pod \"barbican-db-create-vdvd7\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.826333 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-operator-scripts\") pod \"cinder-db-create-4dvx6\" (UID: \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.826377 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkr88\" (UniqueName: \"kubernetes.io/projected/2455ddf8-bd67-4fe1-821e-0feda40d7da9-kube-api-access-bkr88\") pod \"barbican-db-create-vdvd7\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.826460 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d3108c-ea49-46b7-896a-6303b5651abc-operator-scripts\") pod \"cinder-2c7a-account-create-update-s5pbw\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.827480 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/2455ddf8-bd67-4fe1-821e-0feda40d7da9-operator-scripts\") pod \"barbican-db-create-vdvd7\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.827479 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-operator-scripts\") pod \"cinder-db-create-4dvx6\" (UID: \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.867813 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57z68\" (UniqueName: \"kubernetes.io/projected/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-kube-api-access-57z68\") pod \"cinder-db-create-4dvx6\" (UID: \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.884511 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkr88\" (UniqueName: \"kubernetes.io/projected/2455ddf8-bd67-4fe1-821e-0feda40d7da9-kube-api-access-bkr88\") pod \"barbican-db-create-vdvd7\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.885225 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.911023 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-cc7jv"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.912311 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.928093 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxcbv\" (UniqueName: \"kubernetes.io/projected/ccf851bf-a272-4c0a-99a1-97c464d23a0d-kube-api-access-wxcbv\") pod \"neutron-db-create-cc7jv\" (UID: \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.928158 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d3108c-ea49-46b7-896a-6303b5651abc-operator-scripts\") pod \"cinder-2c7a-account-create-update-s5pbw\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.928213 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccf851bf-a272-4c0a-99a1-97c464d23a0d-operator-scripts\") pod \"neutron-db-create-cc7jv\" (UID: \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.928255 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz4kn\" (UniqueName: \"kubernetes.io/projected/e4d3108c-ea49-46b7-896a-6303b5651abc-kube-api-access-dz4kn\") pod \"cinder-2c7a-account-create-update-s5pbw\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.929460 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d3108c-ea49-46b7-896a-6303b5651abc-operator-scripts\") pod 
\"cinder-2c7a-account-create-update-s5pbw\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.953064 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a9b7-account-create-update-sf97j"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.955362 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.963490 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-cc7jv"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.965624 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.007957 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.027289 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a9b7-account-create-update-sf97j"] Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.030551 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc7809e-fb0b-4e26-ad3b-45aceb483265-operator-scripts\") pod \"barbican-a9b7-account-create-update-sf97j\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.030619 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxcbv\" (UniqueName: \"kubernetes.io/projected/ccf851bf-a272-4c0a-99a1-97c464d23a0d-kube-api-access-wxcbv\") pod \"neutron-db-create-cc7jv\" (UID: \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " 
pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.030643 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dps9p\" (UniqueName: \"kubernetes.io/projected/1dc7809e-fb0b-4e26-ad3b-45aceb483265-kube-api-access-dps9p\") pod \"barbican-a9b7-account-create-update-sf97j\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.030666 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccf851bf-a272-4c0a-99a1-97c464d23a0d-operator-scripts\") pod \"neutron-db-create-cc7jv\" (UID: \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.031432 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccf851bf-a272-4c0a-99a1-97c464d23a0d-operator-scripts\") pod \"neutron-db-create-cc7jv\" (UID: \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.057435 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz4kn\" (UniqueName: \"kubernetes.io/projected/e4d3108c-ea49-46b7-896a-6303b5651abc-kube-api-access-dz4kn\") pod \"cinder-2c7a-account-create-update-s5pbw\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.068271 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxcbv\" (UniqueName: \"kubernetes.io/projected/ccf851bf-a272-4c0a-99a1-97c464d23a0d-kube-api-access-wxcbv\") pod \"neutron-db-create-cc7jv\" (UID: 
\"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.117115 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.132267 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc7809e-fb0b-4e26-ad3b-45aceb483265-operator-scripts\") pod \"barbican-a9b7-account-create-update-sf97j\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.133455 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc7809e-fb0b-4e26-ad3b-45aceb483265-operator-scripts\") pod \"barbican-a9b7-account-create-update-sf97j\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.133964 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dps9p\" (UniqueName: \"kubernetes.io/projected/1dc7809e-fb0b-4e26-ad3b-45aceb483265-kube-api-access-dps9p\") pod \"barbican-a9b7-account-create-update-sf97j\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.173435 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dps9p\" (UniqueName: \"kubernetes.io/projected/1dc7809e-fb0b-4e26-ad3b-45aceb483265-kube-api-access-dps9p\") pod \"barbican-a9b7-account-create-update-sf97j\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:15 crc kubenswrapper[4656]: 
I0128 15:38:15.203850 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3131-account-create-update-6kp5h"] Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.205415 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.219131 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.236955 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3131-account-create-update-6kp5h"] Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.338085 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4jbc\" (UniqueName: \"kubernetes.io/projected/cf48bf2a-2b8d-41ac-a712-9218091e8352-kube-api-access-q4jbc\") pod \"neutron-3131-account-create-update-6kp5h\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.338202 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf48bf2a-2b8d-41ac-a712-9218091e8352-operator-scripts\") pod \"neutron-3131-account-create-update-6kp5h\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.357695 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.413514 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.449382 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4jbc\" (UniqueName: \"kubernetes.io/projected/cf48bf2a-2b8d-41ac-a712-9218091e8352-kube-api-access-q4jbc\") pod \"neutron-3131-account-create-update-6kp5h\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.449617 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf48bf2a-2b8d-41ac-a712-9218091e8352-operator-scripts\") pod \"neutron-3131-account-create-update-6kp5h\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.460066 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf48bf2a-2b8d-41ac-a712-9218091e8352-operator-scripts\") pod \"neutron-3131-account-create-update-6kp5h\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.486864 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4jbc\" (UniqueName: \"kubernetes.io/projected/cf48bf2a-2b8d-41ac-a712-9218091e8352-kube-api-access-q4jbc\") pod \"neutron-3131-account-create-update-6kp5h\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.554059 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4dvx6"] Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.569217 4656 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.645192 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4dvx6" event={"ID":"9aed3d60-8ff4-4b82-9bf8-7892dff01cff","Type":"ContainerStarted","Data":"d76b77f287ea9dec0ef0421a43060b12341366ebba74f0e217fcb0bea0847d57"} Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.894888 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vdvd7"] Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.021569 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2c7a-account-create-update-s5pbw"] Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.084938 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-cc7jv"] Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.282957 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a9b7-account-create-update-sf97j"] Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.367997 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3131-account-create-update-6kp5h"] Jan 28 15:38:16 crc kubenswrapper[4656]: W0128 15:38:16.465329 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf851bf_a272_4c0a_99a1_97c464d23a0d.slice/crio-e1baea593ef9082101f89950bf2975a161938bc6fc70b0ea34199032d50887dc WatchSource:0}: Error finding container e1baea593ef9082101f89950bf2975a161938bc6fc70b0ea34199032d50887dc: Status 404 returned error can't find the container with id e1baea593ef9082101f89950bf2975a161938bc6fc70b0ea34199032d50887dc Jan 28 15:38:16 crc kubenswrapper[4656]: W0128 15:38:16.513029 4656 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2455ddf8_bd67_4fe1_821e_0feda40d7da9.slice/crio-448f5e3766be5e25db50996e41eb26b4bf0333c1878fda9c77f99f41f24674d8 WatchSource:0}: Error finding container 448f5e3766be5e25db50996e41eb26b4bf0333c1878fda9c77f99f41f24674d8: Status 404 returned error can't find the container with id 448f5e3766be5e25db50996e41eb26b4bf0333c1878fda9c77f99f41f24674d8 Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.683520 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a9b7-account-create-update-sf97j" event={"ID":"1dc7809e-fb0b-4e26-ad3b-45aceb483265","Type":"ContainerStarted","Data":"49a1c134769a44dcbd496872d4d2a6b24f18b38e3711f4fc9046bd67e56974d5"} Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.686360 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3131-account-create-update-6kp5h" event={"ID":"cf48bf2a-2b8d-41ac-a712-9218091e8352","Type":"ContainerStarted","Data":"af8fdcd790caf1f39c9294b7636081ecd279be7604886dcdfc416f7645a5171e"} Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.687506 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vdvd7" event={"ID":"2455ddf8-bd67-4fe1-821e-0feda40d7da9","Type":"ContainerStarted","Data":"448f5e3766be5e25db50996e41eb26b4bf0333c1878fda9c77f99f41f24674d8"} Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.692853 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cc7jv" event={"ID":"ccf851bf-a272-4c0a-99a1-97c464d23a0d","Type":"ContainerStarted","Data":"e1baea593ef9082101f89950bf2975a161938bc6fc70b0ea34199032d50887dc"} Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.697422 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4dvx6" event={"ID":"9aed3d60-8ff4-4b82-9bf8-7892dff01cff","Type":"ContainerStarted","Data":"00af603897747403f2999c8dd0ea82db99373770b631a4b1c83fa47765ccde4d"} 
Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.702598 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2c7a-account-create-update-s5pbw" event={"ID":"e4d3108c-ea49-46b7-896a-6303b5651abc","Type":"ContainerStarted","Data":"37eaf6f481fc9e5652aa133ddddaacb0e32c7be580e917decf70aee486af0ae2"} Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.715634 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-4dvx6" podStartSLOduration=2.715611071 podStartE2EDuration="2.715611071s" podCreationTimestamp="2026-01-28 15:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:38:16.710187792 +0000 UTC m=+1187.218358596" watchObservedRunningTime="2026-01-28 15:38:16.715611071 +0000 UTC m=+1187.223781875" Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.721011 4656 generic.go:334] "Generic (PLEG): container finished" podID="9aed3d60-8ff4-4b82-9bf8-7892dff01cff" containerID="00af603897747403f2999c8dd0ea82db99373770b631a4b1c83fa47765ccde4d" exitCode=0 Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.721450 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4dvx6" event={"ID":"9aed3d60-8ff4-4b82-9bf8-7892dff01cff","Type":"ContainerDied","Data":"00af603897747403f2999c8dd0ea82db99373770b631a4b1c83fa47765ccde4d"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.729814 4656 generic.go:334] "Generic (PLEG): container finished" podID="e4d3108c-ea49-46b7-896a-6303b5651abc" containerID="02f441349595b2fea556f764337befa680c75f21f42833da737a52fe0e77200f" exitCode=0 Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.729893 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2c7a-account-create-update-s5pbw" 
event={"ID":"e4d3108c-ea49-46b7-896a-6303b5651abc","Type":"ContainerDied","Data":"02f441349595b2fea556f764337befa680c75f21f42833da737a52fe0e77200f"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.743131 4656 generic.go:334] "Generic (PLEG): container finished" podID="1dc7809e-fb0b-4e26-ad3b-45aceb483265" containerID="266a34fa5d7bf2b9b096e72f7c9b6721bd869c8c77436d0e440426fb23a20543" exitCode=0 Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.743368 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a9b7-account-create-update-sf97j" event={"ID":"1dc7809e-fb0b-4e26-ad3b-45aceb483265","Type":"ContainerDied","Data":"266a34fa5d7bf2b9b096e72f7c9b6721bd869c8c77436d0e440426fb23a20543"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.749791 4656 generic.go:334] "Generic (PLEG): container finished" podID="cf48bf2a-2b8d-41ac-a712-9218091e8352" containerID="8bb388635794bed8d36ed9b59e0ebe5aa23722a91994b61ca6e7ca5038083d1d" exitCode=0 Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.749924 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3131-account-create-update-6kp5h" event={"ID":"cf48bf2a-2b8d-41ac-a712-9218091e8352","Type":"ContainerDied","Data":"8bb388635794bed8d36ed9b59e0ebe5aa23722a91994b61ca6e7ca5038083d1d"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.765709 4656 generic.go:334] "Generic (PLEG): container finished" podID="2455ddf8-bd67-4fe1-821e-0feda40d7da9" containerID="810dbe4d5afc6a6d7cc6184ae641765eef2d6efff2d1a416b9a80f9cc06da73c" exitCode=0 Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.765833 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vdvd7" event={"ID":"2455ddf8-bd67-4fe1-821e-0feda40d7da9","Type":"ContainerDied","Data":"810dbe4d5afc6a6d7cc6184ae641765eef2d6efff2d1a416b9a80f9cc06da73c"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.786668 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"5d03e1f80bfb590b2e47db4f68a32fcc2fd42b2e3a7e3e18ad3f5f155bc2ceb8"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.786717 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"86e5928c7021ee3160169a65c05980429646128df05a0fbdc7fae6ebb6902c4e"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.786731 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"1ccf60d8a00f0fe980d3dbe53bea218bfc7b661bca85b6afb67d3b12e5964d6b"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.788432 4656 generic.go:334] "Generic (PLEG): container finished" podID="ccf851bf-a272-4c0a-99a1-97c464d23a0d" containerID="d0d1b804843eff8d434244a805f92f46f9f0772ead6e44bf1c0d417856331fa2" exitCode=0 Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.788466 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cc7jv" event={"ID":"ccf851bf-a272-4c0a-99a1-97c464d23a0d","Type":"ContainerDied","Data":"d0d1b804843eff8d434244a805f92f46f9f0772ead6e44bf1c0d417856331fa2"} Jan 28 15:38:18 crc kubenswrapper[4656]: I0128 15:38:18.806097 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"51994d9c3feae3a7b614810d1e45291d749558636dc7d444ab98eae34b64f5d3"} Jan 28 15:38:18 crc kubenswrapper[4656]: I0128 15:38:18.806467 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"7ba9f3532dc8869112da9f673ed36967c33a0598037780a6378c01917e816d53"} Jan 28 15:38:18 crc kubenswrapper[4656]: 
I0128 15:38:18.806485 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"83919565bd6124f8fbe0ee8f5b1f822feb504fb6e3e76efaa6e313cfd8bfdd81"} Jan 28 15:38:18 crc kubenswrapper[4656]: I0128 15:38:18.806496 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"019b74f0ac49f4c42132100b1d22e149789a25facc2afc452c3afc5fa3e95e4b"} Jan 28 15:38:18 crc kubenswrapper[4656]: I0128 15:38:18.861690 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=43.65019853 podStartE2EDuration="50.86166428s" podCreationTimestamp="2026-01-28 15:37:28 +0000 UTC" firstStartedPulling="2026-01-28 15:38:09.444908631 +0000 UTC m=+1179.953079435" lastFinishedPulling="2026-01-28 15:38:16.656374371 +0000 UTC m=+1187.164545185" observedRunningTime="2026-01-28 15:38:18.852783217 +0000 UTC m=+1189.360954031" watchObservedRunningTime="2026-01-28 15:38:18.86166428 +0000 UTC m=+1189.369835084" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.266291 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cqdfp"] Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.267894 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.273145 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cqdfp"] Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.273482 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.304871 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.317377 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqvqx\" (UniqueName: \"kubernetes.io/projected/a2a47883-aa14-4711-9dad-f6d38bcc706d-kube-api-access-xqvqx\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.317467 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-config\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.317492 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.317527 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.317549 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-swift-storage-0\") pod 
\"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.317616 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.418558 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf48bf2a-2b8d-41ac-a712-9218091e8352-operator-scripts\") pod \"cf48bf2a-2b8d-41ac-a712-9218091e8352\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.418671 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4jbc\" (UniqueName: \"kubernetes.io/projected/cf48bf2a-2b8d-41ac-a712-9218091e8352-kube-api-access-q4jbc\") pod \"cf48bf2a-2b8d-41ac-a712-9218091e8352\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.419034 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqvqx\" (UniqueName: \"kubernetes.io/projected/a2a47883-aa14-4711-9dad-f6d38bcc706d-kube-api-access-xqvqx\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.419068 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-config\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " 
pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.419088 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.419103 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.419121 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.419141 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.419960 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc 
kubenswrapper[4656]: I0128 15:38:19.420310 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf48bf2a-2b8d-41ac-a712-9218091e8352-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf48bf2a-2b8d-41ac-a712-9218091e8352" (UID: "cf48bf2a-2b8d-41ac-a712-9218091e8352"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.422112 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.423373 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-config\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.424069 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.424707 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.440649 4656 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf48bf2a-2b8d-41ac-a712-9218091e8352-kube-api-access-q4jbc" (OuterVolumeSpecName: "kube-api-access-q4jbc") pod "cf48bf2a-2b8d-41ac-a712-9218091e8352" (UID: "cf48bf2a-2b8d-41ac-a712-9218091e8352"). InnerVolumeSpecName "kube-api-access-q4jbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.453930 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqvqx\" (UniqueName: \"kubernetes.io/projected/a2a47883-aa14-4711-9dad-f6d38bcc706d-kube-api-access-xqvqx\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.478566 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.489873 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.511351 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.515808 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.519956 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dps9p\" (UniqueName: \"kubernetes.io/projected/1dc7809e-fb0b-4e26-ad3b-45aceb483265-kube-api-access-dps9p\") pod \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.520026 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc7809e-fb0b-4e26-ad3b-45aceb483265-operator-scripts\") pod \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.520102 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccf851bf-a272-4c0a-99a1-97c464d23a0d-operator-scripts\") pod \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\" (UID: \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.520157 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxcbv\" (UniqueName: \"kubernetes.io/projected/ccf851bf-a272-4c0a-99a1-97c464d23a0d-kube-api-access-wxcbv\") pod \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\" (UID: \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.520505 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4jbc\" (UniqueName: \"kubernetes.io/projected/cf48bf2a-2b8d-41ac-a712-9218091e8352-kube-api-access-q4jbc\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.520521 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cf48bf2a-2b8d-41ac-a712-9218091e8352-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.521470 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccf851bf-a272-4c0a-99a1-97c464d23a0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ccf851bf-a272-4c0a-99a1-97c464d23a0d" (UID: "ccf851bf-a272-4c0a-99a1-97c464d23a0d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.521539 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.523610 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccf851bf-a272-4c0a-99a1-97c464d23a0d-kube-api-access-wxcbv" (OuterVolumeSpecName: "kube-api-access-wxcbv") pod "ccf851bf-a272-4c0a-99a1-97c464d23a0d" (UID: "ccf851bf-a272-4c0a-99a1-97c464d23a0d"). InnerVolumeSpecName "kube-api-access-wxcbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.523672 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc7809e-fb0b-4e26-ad3b-45aceb483265-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1dc7809e-fb0b-4e26-ad3b-45aceb483265" (UID: "1dc7809e-fb0b-4e26-ad3b-45aceb483265"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.524361 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc7809e-fb0b-4e26-ad3b-45aceb483265-kube-api-access-dps9p" (OuterVolumeSpecName: "kube-api-access-dps9p") pod "1dc7809e-fb0b-4e26-ad3b-45aceb483265" (UID: "1dc7809e-fb0b-4e26-ad3b-45aceb483265"). InnerVolumeSpecName "kube-api-access-dps9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.621213 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2455ddf8-bd67-4fe1-821e-0feda40d7da9-operator-scripts\") pod \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.621365 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-operator-scripts\") pod \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\" (UID: \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.621392 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57z68\" (UniqueName: \"kubernetes.io/projected/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-kube-api-access-57z68\") pod \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\" (UID: \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.621419 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz4kn\" (UniqueName: \"kubernetes.io/projected/e4d3108c-ea49-46b7-896a-6303b5651abc-kube-api-access-dz4kn\") pod \"e4d3108c-ea49-46b7-896a-6303b5651abc\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " Jan 28 15:38:19 crc 
kubenswrapper[4656]: I0128 15:38:19.621544 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkr88\" (UniqueName: \"kubernetes.io/projected/2455ddf8-bd67-4fe1-821e-0feda40d7da9-kube-api-access-bkr88\") pod \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.621587 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d3108c-ea49-46b7-896a-6303b5651abc-operator-scripts\") pod \"e4d3108c-ea49-46b7-896a-6303b5651abc\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.621997 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dps9p\" (UniqueName: \"kubernetes.io/projected/1dc7809e-fb0b-4e26-ad3b-45aceb483265-kube-api-access-dps9p\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.622021 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc7809e-fb0b-4e26-ad3b-45aceb483265-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.622032 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccf851bf-a272-4c0a-99a1-97c464d23a0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.622045 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxcbv\" (UniqueName: \"kubernetes.io/projected/ccf851bf-a272-4c0a-99a1-97c464d23a0d-kube-api-access-wxcbv\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.622594 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/e4d3108c-ea49-46b7-896a-6303b5651abc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4d3108c-ea49-46b7-896a-6303b5651abc" (UID: "e4d3108c-ea49-46b7-896a-6303b5651abc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.623041 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2455ddf8-bd67-4fe1-821e-0feda40d7da9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2455ddf8-bd67-4fe1-821e-0feda40d7da9" (UID: "2455ddf8-bd67-4fe1-821e-0feda40d7da9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.623665 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9aed3d60-8ff4-4b82-9bf8-7892dff01cff" (UID: "9aed3d60-8ff4-4b82-9bf8-7892dff01cff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.627117 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2455ddf8-bd67-4fe1-821e-0feda40d7da9-kube-api-access-bkr88" (OuterVolumeSpecName: "kube-api-access-bkr88") pod "2455ddf8-bd67-4fe1-821e-0feda40d7da9" (UID: "2455ddf8-bd67-4fe1-821e-0feda40d7da9"). InnerVolumeSpecName "kube-api-access-bkr88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.628113 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d3108c-ea49-46b7-896a-6303b5651abc-kube-api-access-dz4kn" (OuterVolumeSpecName: "kube-api-access-dz4kn") pod "e4d3108c-ea49-46b7-896a-6303b5651abc" (UID: "e4d3108c-ea49-46b7-896a-6303b5651abc"). InnerVolumeSpecName "kube-api-access-dz4kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.630533 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-kube-api-access-57z68" (OuterVolumeSpecName: "kube-api-access-57z68") pod "9aed3d60-8ff4-4b82-9bf8-7892dff01cff" (UID: "9aed3d60-8ff4-4b82-9bf8-7892dff01cff"). InnerVolumeSpecName "kube-api-access-57z68". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.643034 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.725263 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkr88\" (UniqueName: \"kubernetes.io/projected/2455ddf8-bd67-4fe1-821e-0feda40d7da9-kube-api-access-bkr88\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.725293 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d3108c-ea49-46b7-896a-6303b5651abc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.725303 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2455ddf8-bd67-4fe1-821e-0feda40d7da9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.725314 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.725322 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57z68\" (UniqueName: \"kubernetes.io/projected/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-kube-api-access-57z68\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.725331 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz4kn\" (UniqueName: \"kubernetes.io/projected/e4d3108c-ea49-46b7-896a-6303b5651abc-kube-api-access-dz4kn\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.020129 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2c7a-account-create-update-s5pbw" 
event={"ID":"e4d3108c-ea49-46b7-896a-6303b5651abc","Type":"ContainerDied","Data":"37eaf6f481fc9e5652aa133ddddaacb0e32c7be580e917decf70aee486af0ae2"} Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.020531 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37eaf6f481fc9e5652aa133ddddaacb0e32c7be580e917decf70aee486af0ae2" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.020628 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.035816 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.035864 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3131-account-create-update-6kp5h" event={"ID":"cf48bf2a-2b8d-41ac-a712-9218091e8352","Type":"ContainerDied","Data":"af8fdcd790caf1f39c9294b7636081ecd279be7604886dcdfc416f7645a5171e"} Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.035927 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af8fdcd790caf1f39c9294b7636081ecd279be7604886dcdfc416f7645a5171e" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.043897 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vdvd7" event={"ID":"2455ddf8-bd67-4fe1-821e-0feda40d7da9","Type":"ContainerDied","Data":"448f5e3766be5e25db50996e41eb26b4bf0333c1878fda9c77f99f41f24674d8"} Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.043939 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.043943 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="448f5e3766be5e25db50996e41eb26b4bf0333c1878fda9c77f99f41f24674d8" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.053298 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cc7jv" event={"ID":"ccf851bf-a272-4c0a-99a1-97c464d23a0d","Type":"ContainerDied","Data":"e1baea593ef9082101f89950bf2975a161938bc6fc70b0ea34199032d50887dc"} Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.053336 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1baea593ef9082101f89950bf2975a161938bc6fc70b0ea34199032d50887dc" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.053406 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.063997 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4dvx6" event={"ID":"9aed3d60-8ff4-4b82-9bf8-7892dff01cff","Type":"ContainerDied","Data":"d76b77f287ea9dec0ef0421a43060b12341366ebba74f0e217fcb0bea0847d57"} Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.064034 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d76b77f287ea9dec0ef0421a43060b12341366ebba74f0e217fcb0bea0847d57" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.064073 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.066934 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.067096 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a9b7-account-create-update-sf97j" event={"ID":"1dc7809e-fb0b-4e26-ad3b-45aceb483265","Type":"ContainerDied","Data":"49a1c134769a44dcbd496872d4d2a6b24f18b38e3711f4fc9046bd67e56974d5"} Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.067125 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49a1c134769a44dcbd496872d4d2a6b24f18b38e3711f4fc9046bd67e56974d5" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.272453 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cqdfp"] Jan 28 15:38:20 crc kubenswrapper[4656]: W0128 15:38:20.277260 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2a47883_aa14_4711_9dad_f6d38bcc706d.slice/crio-b9f02976b2b0ef03be91cf5c1b06ce1bb110d463510277fa0a060cd45307c5c5 WatchSource:0}: Error finding container b9f02976b2b0ef03be91cf5c1b06ce1bb110d463510277fa0a060cd45307c5c5: Status 404 returned error can't find the container with id b9f02976b2b0ef03be91cf5c1b06ce1bb110d463510277fa0a060cd45307c5c5 Jan 28 15:38:21 crc kubenswrapper[4656]: I0128 15:38:21.076038 4656 generic.go:334] "Generic (PLEG): container finished" podID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerID="c32f049d51119c0894e074dc9838f05cf80306bb19c54ff776651984f935948f" exitCode=0 Jan 28 15:38:21 crc kubenswrapper[4656]: I0128 15:38:21.076585 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" event={"ID":"a2a47883-aa14-4711-9dad-f6d38bcc706d","Type":"ContainerDied","Data":"c32f049d51119c0894e074dc9838f05cf80306bb19c54ff776651984f935948f"} Jan 28 15:38:21 crc kubenswrapper[4656]: I0128 15:38:21.076639 4656 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" event={"ID":"a2a47883-aa14-4711-9dad-f6d38bcc706d","Type":"ContainerStarted","Data":"b9f02976b2b0ef03be91cf5c1b06ce1bb110d463510277fa0a060cd45307c5c5"} Jan 28 15:38:22 crc kubenswrapper[4656]: I0128 15:38:22.086313 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" event={"ID":"a2a47883-aa14-4711-9dad-f6d38bcc706d","Type":"ContainerStarted","Data":"d1c6cf74fc397e249939755bb43aa2db7be45a672b9ad5c5703c67ba5d1c06a1"} Jan 28 15:38:22 crc kubenswrapper[4656]: I0128 15:38:22.086691 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:22 crc kubenswrapper[4656]: I0128 15:38:22.110812 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" podStartSLOduration=3.110790236 podStartE2EDuration="3.110790236s" podCreationTimestamp="2026-01-28 15:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:38:22.105102401 +0000 UTC m=+1192.613273205" watchObservedRunningTime="2026-01-28 15:38:22.110790236 +0000 UTC m=+1192.618961030" Jan 28 15:38:26 crc kubenswrapper[4656]: I0128 15:38:26.119476 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-842wg" event={"ID":"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c","Type":"ContainerStarted","Data":"b5f01d437841162f7d723c8a3461fbed8e9508da364b69783ba5047744e009bb"} Jan 28 15:38:26 crc kubenswrapper[4656]: I0128 15:38:26.144887 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-842wg" podStartSLOduration=1.9414290090000001 podStartE2EDuration="35.144859862s" podCreationTimestamp="2026-01-28 15:37:51 +0000 UTC" firstStartedPulling="2026-01-28 15:37:51.95858719 +0000 UTC m=+1162.466758004" 
lastFinishedPulling="2026-01-28 15:38:25.162018043 +0000 UTC m=+1195.670188857" observedRunningTime="2026-01-28 15:38:26.141308084 +0000 UTC m=+1196.649478888" watchObservedRunningTime="2026-01-28 15:38:26.144859862 +0000 UTC m=+1196.653030676" Jan 28 15:38:29 crc kubenswrapper[4656]: I0128 15:38:29.645459 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:29 crc kubenswrapper[4656]: I0128 15:38:29.722747 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jpj69"] Jan 28 15:38:29 crc kubenswrapper[4656]: I0128 15:38:29.722998 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-jpj69" podUID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerName="dnsmasq-dns" containerID="cri-o://88b915705c2ba5912b5cc50a01d396aa718cc235e8432ca2223a91e7cf085d37" gracePeriod=10 Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.151841 4656 generic.go:334] "Generic (PLEG): container finished" podID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerID="88b915705c2ba5912b5cc50a01d396aa718cc235e8432ca2223a91e7cf085d37" exitCode=0 Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.151944 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jpj69" event={"ID":"c152e3b8-7b70-4580-988e-4cf053f87aa2","Type":"ContainerDied","Data":"88b915705c2ba5912b5cc50a01d396aa718cc235e8432ca2223a91e7cf085d37"} Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.152141 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jpj69" event={"ID":"c152e3b8-7b70-4580-988e-4cf053f87aa2","Type":"ContainerDied","Data":"51515a2b95a452c7a97ce3ad5d48cea215fd18e12f3ce81f0a7c8990597a60e1"} Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.152203 4656 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="51515a2b95a452c7a97ce3ad5d48cea215fd18e12f3ce81f0a7c8990597a60e1"
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.157213 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jpj69"
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.269582 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-dns-svc\") pod \"c152e3b8-7b70-4580-988e-4cf053f87aa2\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") "
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.269809 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-nb\") pod \"c152e3b8-7b70-4580-988e-4cf053f87aa2\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") "
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.269871 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-sb\") pod \"c152e3b8-7b70-4580-988e-4cf053f87aa2\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") "
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.269896 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d578\" (UniqueName: \"kubernetes.io/projected/c152e3b8-7b70-4580-988e-4cf053f87aa2-kube-api-access-8d578\") pod \"c152e3b8-7b70-4580-988e-4cf053f87aa2\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") "
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.269953 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-config\") pod \"c152e3b8-7b70-4580-988e-4cf053f87aa2\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") "
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.284394 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c152e3b8-7b70-4580-988e-4cf053f87aa2-kube-api-access-8d578" (OuterVolumeSpecName: "kube-api-access-8d578") pod "c152e3b8-7b70-4580-988e-4cf053f87aa2" (UID: "c152e3b8-7b70-4580-988e-4cf053f87aa2"). InnerVolumeSpecName "kube-api-access-8d578". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.313372 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c152e3b8-7b70-4580-988e-4cf053f87aa2" (UID: "c152e3b8-7b70-4580-988e-4cf053f87aa2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.324532 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c152e3b8-7b70-4580-988e-4cf053f87aa2" (UID: "c152e3b8-7b70-4580-988e-4cf053f87aa2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.336897 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-config" (OuterVolumeSpecName: "config") pod "c152e3b8-7b70-4580-988e-4cf053f87aa2" (UID: "c152e3b8-7b70-4580-988e-4cf053f87aa2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.337115 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c152e3b8-7b70-4580-988e-4cf053f87aa2" (UID: "c152e3b8-7b70-4580-988e-4cf053f87aa2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.371327 4656 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.371377 4656 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.371391 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d578\" (UniqueName: \"kubernetes.io/projected/c152e3b8-7b70-4580-988e-4cf053f87aa2-kube-api-access-8d578\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.371406 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.371414 4656 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:31 crc kubenswrapper[4656]: I0128 15:38:31.159790 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jpj69"
Jan 28 15:38:31 crc kubenswrapper[4656]: I0128 15:38:31.210638 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jpj69"]
Jan 28 15:38:31 crc kubenswrapper[4656]: I0128 15:38:31.221641 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jpj69"]
Jan 28 15:38:32 crc kubenswrapper[4656]: I0128 15:38:32.169375 4656 generic.go:334] "Generic (PLEG): container finished" podID="72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" containerID="b5f01d437841162f7d723c8a3461fbed8e9508da364b69783ba5047744e009bb" exitCode=0
Jan 28 15:38:32 crc kubenswrapper[4656]: I0128 15:38:32.169479 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-842wg" event={"ID":"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c","Type":"ContainerDied","Data":"b5f01d437841162f7d723c8a3461fbed8e9508da364b69783ba5047744e009bb"}
Jan 28 15:38:33 crc kubenswrapper[4656]: I0128 15:38:33.179525 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c152e3b8-7b70-4580-988e-4cf053f87aa2" path="/var/lib/kubelet/pods/c152e3b8-7b70-4580-988e-4cf053f87aa2/volumes"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.542708 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-842wg"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.623609 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-db-sync-config-data\") pod \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") "
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.623749 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-combined-ca-bundle\") pod \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") "
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.623778 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-config-data\") pod \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") "
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.623811 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6d9d\" (UniqueName: \"kubernetes.io/projected/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-kube-api-access-x6d9d\") pod \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") "
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.642843 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" (UID: "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.643470 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-kube-api-access-x6d9d" (OuterVolumeSpecName: "kube-api-access-x6d9d") pod "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" (UID: "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c"). InnerVolumeSpecName "kube-api-access-x6d9d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.652861 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" (UID: "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.672621 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-config-data" (OuterVolumeSpecName: "config-data") pod "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" (UID: "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.725143 4656 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.725186 4656 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.725196 4656 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.725206 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6d9d\" (UniqueName: \"kubernetes.io/projected/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-kube-api-access-x6d9d\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.187784 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-842wg" event={"ID":"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c","Type":"ContainerDied","Data":"504211dfa97dc92df86e8b70c17397351feacd00a97b729f5ccfa3e6d0b19223"}
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.188057 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="504211dfa97dc92df86e8b70c17397351feacd00a97b729f5ccfa3e6d0b19223"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.187852 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-842wg"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624034 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8fng8"]
Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624469 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" containerName="glance-db-sync"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624494 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" containerName="glance-db-sync"
Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624511 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d3108c-ea49-46b7-896a-6303b5651abc" containerName="mariadb-account-create-update"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624520 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d3108c-ea49-46b7-896a-6303b5651abc" containerName="mariadb-account-create-update"
Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624532 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf48bf2a-2b8d-41ac-a712-9218091e8352" containerName="mariadb-account-create-update"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624539 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf48bf2a-2b8d-41ac-a712-9218091e8352" containerName="mariadb-account-create-update"
Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624553 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccf851bf-a272-4c0a-99a1-97c464d23a0d" containerName="mariadb-database-create"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624562 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccf851bf-a272-4c0a-99a1-97c464d23a0d" containerName="mariadb-database-create"
Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624578 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2455ddf8-bd67-4fe1-821e-0feda40d7da9" containerName="mariadb-database-create"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624586 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="2455ddf8-bd67-4fe1-821e-0feda40d7da9" containerName="mariadb-database-create"
Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624599 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerName="dnsmasq-dns"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624606 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerName="dnsmasq-dns"
Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624640 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerName="init"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624649 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerName="init"
Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624660 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aed3d60-8ff4-4b82-9bf8-7892dff01cff" containerName="mariadb-database-create"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624667 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aed3d60-8ff4-4b82-9bf8-7892dff01cff" containerName="mariadb-database-create"
Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624683 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc7809e-fb0b-4e26-ad3b-45aceb483265" containerName="mariadb-account-create-update"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624698 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc7809e-fb0b-4e26-ad3b-45aceb483265" containerName="mariadb-account-create-update"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624945 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc7809e-fb0b-4e26-ad3b-45aceb483265" containerName="mariadb-account-create-update"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624976 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="2455ddf8-bd67-4fe1-821e-0feda40d7da9" containerName="mariadb-database-create"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624986 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d3108c-ea49-46b7-896a-6303b5651abc" containerName="mariadb-account-create-update"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624996 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerName="dnsmasq-dns"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.625008 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" containerName="glance-db-sync"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.625027 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccf851bf-a272-4c0a-99a1-97c464d23a0d" containerName="mariadb-database-create"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.625040 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf48bf2a-2b8d-41ac-a712-9218091e8352" containerName="mariadb-account-create-update"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.625052 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aed3d60-8ff4-4b82-9bf8-7892dff01cff" containerName="mariadb-database-create"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.626173 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.662298 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8fng8"]
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.794370 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.794467 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.794542 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.794584 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-config\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.794649 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.794706 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cr8h\" (UniqueName: \"kubernetes.io/projected/0d7bd9aa-43b5-4819-9bef-a61574670ba6-kube-api-access-7cr8h\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.895989 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.896054 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.896113 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.896146 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-config\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.896242 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.896414 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cr8h\" (UniqueName: \"kubernetes.io/projected/0d7bd9aa-43b5-4819-9bef-a61574670ba6-kube-api-access-7cr8h\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.897332 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.897349 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.897392 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-config\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.897510 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.897728 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.917177 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cr8h\" (UniqueName: \"kubernetes.io/projected/0d7bd9aa-43b5-4819-9bef-a61574670ba6-kube-api-access-7cr8h\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.943756 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:35 crc kubenswrapper[4656]: I0128 15:38:35.455253 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8fng8"]
Jan 28 15:38:35 crc kubenswrapper[4656]: W0128 15:38:35.455316 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d7bd9aa_43b5_4819_9bef_a61574670ba6.slice/crio-42f926b63c13038ba967b33cf82f2a497052b6a1c4148c749b3d09afae66ae61 WatchSource:0}: Error finding container 42f926b63c13038ba967b33cf82f2a497052b6a1c4148c749b3d09afae66ae61: Status 404 returned error can't find the container with id 42f926b63c13038ba967b33cf82f2a497052b6a1c4148c749b3d09afae66ae61
Jan 28 15:38:36 crc kubenswrapper[4656]: I0128 15:38:36.208039 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" event={"ID":"0d7bd9aa-43b5-4819-9bef-a61574670ba6","Type":"ContainerStarted","Data":"42f926b63c13038ba967b33cf82f2a497052b6a1c4148c749b3d09afae66ae61"}
Jan 28 15:38:37 crc kubenswrapper[4656]: I0128 15:38:37.220406 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" event={"ID":"0d7bd9aa-43b5-4819-9bef-a61574670ba6","Type":"ContainerStarted","Data":"9315b2ae2af35b8d1c9ebe22d5f2551303838ed4de4faf1ae9b74ba2e0e4fbd1"}
Jan 28 15:38:38 crc kubenswrapper[4656]: I0128 15:38:38.230580 4656 generic.go:334] "Generic (PLEG): container finished" podID="0d7bd9aa-43b5-4819-9bef-a61574670ba6" containerID="9315b2ae2af35b8d1c9ebe22d5f2551303838ed4de4faf1ae9b74ba2e0e4fbd1" exitCode=0
Jan 28 15:38:38 crc kubenswrapper[4656]: I0128 15:38:38.230631 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" event={"ID":"0d7bd9aa-43b5-4819-9bef-a61574670ba6","Type":"ContainerDied","Data":"9315b2ae2af35b8d1c9ebe22d5f2551303838ed4de4faf1ae9b74ba2e0e4fbd1"}
Jan 28 15:38:39 crc kubenswrapper[4656]: I0128 15:38:39.245040 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" event={"ID":"0d7bd9aa-43b5-4819-9bef-a61574670ba6","Type":"ContainerStarted","Data":"d090df0d1ab62b9ad7988b6f8160bc07dccc14adaccb0b2d23c9f7106b85dd58"}
Jan 28 15:38:39 crc kubenswrapper[4656]: I0128 15:38:39.245596 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:39 crc kubenswrapper[4656]: I0128 15:38:39.272368 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" podStartSLOduration=5.272346077 podStartE2EDuration="5.272346077s" podCreationTimestamp="2026-01-28 15:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:38:39.268129262 +0000 UTC m=+1209.776300056" watchObservedRunningTime="2026-01-28 15:38:39.272346077 +0000 UTC m=+1209.780516891"
Jan 28 15:38:41 crc kubenswrapper[4656]: I0128 15:38:41.263808 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:38:41 crc kubenswrapper[4656]: I0128 15:38:41.264186 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:38:44 crc kubenswrapper[4656]: I0128 15:38:44.945450 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8"
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.003884 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cqdfp"]
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.004191 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" podUID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerName="dnsmasq-dns" containerID="cri-o://d1c6cf74fc397e249939755bb43aa2db7be45a672b9ad5c5703c67ba5d1c06a1" gracePeriod=10
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.303432 4656 generic.go:334] "Generic (PLEG): container finished" podID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerID="d1c6cf74fc397e249939755bb43aa2db7be45a672b9ad5c5703c67ba5d1c06a1" exitCode=0
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.303507 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" event={"ID":"a2a47883-aa14-4711-9dad-f6d38bcc706d","Type":"ContainerDied","Data":"d1c6cf74fc397e249939755bb43aa2db7be45a672b9ad5c5703c67ba5d1c06a1"}
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.523277 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp"
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.674620 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-svc\") pod \"a2a47883-aa14-4711-9dad-f6d38bcc706d\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") "
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.674689 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-sb\") pod \"a2a47883-aa14-4711-9dad-f6d38bcc706d\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") "
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.674735 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-swift-storage-0\") pod \"a2a47883-aa14-4711-9dad-f6d38bcc706d\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") "
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.674806 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqvqx\" (UniqueName: \"kubernetes.io/projected/a2a47883-aa14-4711-9dad-f6d38bcc706d-kube-api-access-xqvqx\") pod \"a2a47883-aa14-4711-9dad-f6d38bcc706d\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") "
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.674923 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-nb\") pod \"a2a47883-aa14-4711-9dad-f6d38bcc706d\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") "
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.674982 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-config\") pod \"a2a47883-aa14-4711-9dad-f6d38bcc706d\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") "
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.680822 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2a47883-aa14-4711-9dad-f6d38bcc706d-kube-api-access-xqvqx" (OuterVolumeSpecName: "kube-api-access-xqvqx") pod "a2a47883-aa14-4711-9dad-f6d38bcc706d" (UID: "a2a47883-aa14-4711-9dad-f6d38bcc706d"). InnerVolumeSpecName "kube-api-access-xqvqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.726677 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a2a47883-aa14-4711-9dad-f6d38bcc706d" (UID: "a2a47883-aa14-4711-9dad-f6d38bcc706d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.734946 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a2a47883-aa14-4711-9dad-f6d38bcc706d" (UID: "a2a47883-aa14-4711-9dad-f6d38bcc706d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.740148 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-config" (OuterVolumeSpecName: "config") pod "a2a47883-aa14-4711-9dad-f6d38bcc706d" (UID: "a2a47883-aa14-4711-9dad-f6d38bcc706d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.740721 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a2a47883-aa14-4711-9dad-f6d38bcc706d" (UID: "a2a47883-aa14-4711-9dad-f6d38bcc706d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.744555 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a2a47883-aa14-4711-9dad-f6d38bcc706d" (UID: "a2a47883-aa14-4711-9dad-f6d38bcc706d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.777905 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.777978 4656 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.777992 4656 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.778467 4656 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.778483 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqvqx\" (UniqueName: \"kubernetes.io/projected/a2a47883-aa14-4711-9dad-f6d38bcc706d-kube-api-access-xqvqx\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.778491 4656 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 15:38:46 crc kubenswrapper[4656]: I0128 15:38:46.313589 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" event={"ID":"a2a47883-aa14-4711-9dad-f6d38bcc706d","Type":"ContainerDied","Data":"b9f02976b2b0ef03be91cf5c1b06ce1bb110d463510277fa0a060cd45307c5c5"}
Jan 28 15:38:46 crc kubenswrapper[4656]: I0128 15:38:46.313682 4656 scope.go:117] "RemoveContainer" containerID="d1c6cf74fc397e249939755bb43aa2db7be45a672b9ad5c5703c67ba5d1c06a1"
Jan 28 15:38:46 crc kubenswrapper[4656]: I0128 15:38:46.313909 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp"
Jan 28 15:38:46 crc kubenswrapper[4656]: I0128 15:38:46.351345 4656 scope.go:117] "RemoveContainer" containerID="c32f049d51119c0894e074dc9838f05cf80306bb19c54ff776651984f935948f"
Jan 28 15:38:46 crc kubenswrapper[4656]: I0128 15:38:46.359298 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cqdfp"]
Jan 28 15:38:46 crc kubenswrapper[4656]: I0128 15:38:46.372475 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cqdfp"]
Jan 28 15:38:47 crc kubenswrapper[4656]: I0128 15:38:47.183444 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2a47883-aa14-4711-9dad-f6d38bcc706d" path="/var/lib/kubelet/pods/a2a47883-aa14-4711-9dad-f6d38bcc706d/volumes"
Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.263878 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.264685 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.264759 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk"
Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.265740 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon"
containerStatusID={"Type":"cri-o","ID":"e4705c729984fa104745366d57583e3ee80c3a326cc35a32720920b368391441"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.265832 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://e4705c729984fa104745366d57583e3ee80c3a326cc35a32720920b368391441" gracePeriod=600 Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.528127 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="e4705c729984fa104745366d57583e3ee80c3a326cc35a32720920b368391441" exitCode=0 Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.528193 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"e4705c729984fa104745366d57583e3ee80c3a326cc35a32720920b368391441"} Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.528271 4656 scope.go:117] "RemoveContainer" containerID="45af716abfac826ba3a4dfbcd1d22436c5270721d55f11ffa5d85cae3cd0840f" Jan 28 15:39:12 crc kubenswrapper[4656]: I0128 15:39:12.536658 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"42409758b852b659a7b9211c23462e509a808c14e072325970ea9f330c308f98"} Jan 28 15:41:11 crc kubenswrapper[4656]: I0128 15:41:11.263831 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:41:11 crc kubenswrapper[4656]: I0128 15:41:11.264517 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.101998 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hn7cf"] Jan 28 15:41:40 crc kubenswrapper[4656]: E0128 15:41:40.102967 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerName="init" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.102989 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerName="init" Jan 28 15:41:40 crc kubenswrapper[4656]: E0128 15:41:40.103015 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerName="dnsmasq-dns" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.103024 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerName="dnsmasq-dns" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.105881 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerName="dnsmasq-dns" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.107817 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.136536 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hn7cf"] Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.166881 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kftrx\" (UniqueName: \"kubernetes.io/projected/64a04b9d-2d5a-4095-9292-c7d5de74f369-kube-api-access-kftrx\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.167398 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-catalog-content\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.167597 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-utilities\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.271843 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kftrx\" (UniqueName: \"kubernetes.io/projected/64a04b9d-2d5a-4095-9292-c7d5de74f369-kube-api-access-kftrx\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.272662 4656 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-catalog-content\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.273508 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-catalog-content\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.273753 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-utilities\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.274189 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-utilities\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.293477 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kftrx\" (UniqueName: \"kubernetes.io/projected/64a04b9d-2d5a-4095-9292-c7d5de74f369-kube-api-access-kftrx\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.476366 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:41 crc kubenswrapper[4656]: I0128 15:41:41.022994 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hn7cf"] Jan 28 15:41:41 crc kubenswrapper[4656]: I0128 15:41:41.081802 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn7cf" event={"ID":"64a04b9d-2d5a-4095-9292-c7d5de74f369","Type":"ContainerStarted","Data":"bb25b8fc8ba237291897b37ef6bca9be8ae874b0b683f20ca0a47026007eccec"} Jan 28 15:41:41 crc kubenswrapper[4656]: I0128 15:41:41.264094 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:41:41 crc kubenswrapper[4656]: I0128 15:41:41.264256 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:41:42 crc kubenswrapper[4656]: I0128 15:41:42.092124 4656 generic.go:334] "Generic (PLEG): container finished" podID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerID="7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59" exitCode=0 Jan 28 15:41:42 crc kubenswrapper[4656]: I0128 15:41:42.092191 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn7cf" event={"ID":"64a04b9d-2d5a-4095-9292-c7d5de74f369","Type":"ContainerDied","Data":"7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59"} Jan 28 15:41:42 crc kubenswrapper[4656]: I0128 15:41:42.095128 4656 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:41:44 crc kubenswrapper[4656]: I0128 15:41:44.111905 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn7cf" event={"ID":"64a04b9d-2d5a-4095-9292-c7d5de74f369","Type":"ContainerStarted","Data":"929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473"} Jan 28 15:41:45 crc kubenswrapper[4656]: I0128 15:41:45.121918 4656 generic.go:334] "Generic (PLEG): container finished" podID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerID="929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473" exitCode=0 Jan 28 15:41:45 crc kubenswrapper[4656]: I0128 15:41:45.121960 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn7cf" event={"ID":"64a04b9d-2d5a-4095-9292-c7d5de74f369","Type":"ContainerDied","Data":"929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473"} Jan 28 15:41:50 crc kubenswrapper[4656]: I0128 15:41:50.164779 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn7cf" event={"ID":"64a04b9d-2d5a-4095-9292-c7d5de74f369","Type":"ContainerStarted","Data":"000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570"} Jan 28 15:41:50 crc kubenswrapper[4656]: I0128 15:41:50.190793 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hn7cf" podStartSLOduration=2.537875113 podStartE2EDuration="10.190764837s" podCreationTimestamp="2026-01-28 15:41:40 +0000 UTC" firstStartedPulling="2026-01-28 15:41:42.094721099 +0000 UTC m=+1392.602891903" lastFinishedPulling="2026-01-28 15:41:49.747610823 +0000 UTC m=+1400.255781627" observedRunningTime="2026-01-28 15:41:50.180691167 +0000 UTC m=+1400.688861981" watchObservedRunningTime="2026-01-28 15:41:50.190764837 +0000 UTC m=+1400.698935641" Jan 28 15:41:50 crc kubenswrapper[4656]: I0128 15:41:50.478466 4656 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:50 crc kubenswrapper[4656]: I0128 15:41:50.478778 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:51 crc kubenswrapper[4656]: I0128 15:41:51.534743 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hn7cf" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="registry-server" probeResult="failure" output=< Jan 28 15:41:51 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 15:41:51 crc kubenswrapper[4656]: > Jan 28 15:42:00 crc kubenswrapper[4656]: I0128 15:42:00.574217 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:42:00 crc kubenswrapper[4656]: I0128 15:42:00.685396 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:42:02 crc kubenswrapper[4656]: I0128 15:42:02.470622 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hn7cf"] Jan 28 15:42:02 crc kubenswrapper[4656]: I0128 15:42:02.471065 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hn7cf" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="registry-server" containerID="cri-o://000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570" gracePeriod=2 Jan 28 15:42:02 crc kubenswrapper[4656]: I0128 15:42:02.958550 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.070913 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-catalog-content\") pod \"64a04b9d-2d5a-4095-9292-c7d5de74f369\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.071051 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kftrx\" (UniqueName: \"kubernetes.io/projected/64a04b9d-2d5a-4095-9292-c7d5de74f369-kube-api-access-kftrx\") pod \"64a04b9d-2d5a-4095-9292-c7d5de74f369\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.072316 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-utilities\") pod \"64a04b9d-2d5a-4095-9292-c7d5de74f369\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.073117 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-utilities" (OuterVolumeSpecName: "utilities") pod "64a04b9d-2d5a-4095-9292-c7d5de74f369" (UID: "64a04b9d-2d5a-4095-9292-c7d5de74f369"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.076698 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a04b9d-2d5a-4095-9292-c7d5de74f369-kube-api-access-kftrx" (OuterVolumeSpecName: "kube-api-access-kftrx") pod "64a04b9d-2d5a-4095-9292-c7d5de74f369" (UID: "64a04b9d-2d5a-4095-9292-c7d5de74f369"). InnerVolumeSpecName "kube-api-access-kftrx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.176940 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kftrx\" (UniqueName: \"kubernetes.io/projected/64a04b9d-2d5a-4095-9292-c7d5de74f369-kube-api-access-kftrx\") on node \"crc\" DevicePath \"\"" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.176968 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.216621 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64a04b9d-2d5a-4095-9292-c7d5de74f369" (UID: "64a04b9d-2d5a-4095-9292-c7d5de74f369"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.269155 4656 generic.go:334] "Generic (PLEG): container finished" podID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerID="000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570" exitCode=0 Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.269328 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.269317 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn7cf" event={"ID":"64a04b9d-2d5a-4095-9292-c7d5de74f369","Type":"ContainerDied","Data":"000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570"} Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.270467 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn7cf" event={"ID":"64a04b9d-2d5a-4095-9292-c7d5de74f369","Type":"ContainerDied","Data":"bb25b8fc8ba237291897b37ef6bca9be8ae874b0b683f20ca0a47026007eccec"} Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.270515 4656 scope.go:117] "RemoveContainer" containerID="000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.279138 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.294829 4656 scope.go:117] "RemoveContainer" containerID="929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.312527 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hn7cf"] Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.317445 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hn7cf"] Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.324215 4656 scope.go:117] "RemoveContainer" containerID="7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.365947 4656 scope.go:117] "RemoveContainer" 
containerID="000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570" Jan 28 15:42:03 crc kubenswrapper[4656]: E0128 15:42:03.366472 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570\": container with ID starting with 000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570 not found: ID does not exist" containerID="000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.366513 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570"} err="failed to get container status \"000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570\": rpc error: code = NotFound desc = could not find container \"000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570\": container with ID starting with 000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570 not found: ID does not exist" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.366536 4656 scope.go:117] "RemoveContainer" containerID="929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473" Jan 28 15:42:03 crc kubenswrapper[4656]: E0128 15:42:03.366882 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473\": container with ID starting with 929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473 not found: ID does not exist" containerID="929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.366919 4656 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473"} err="failed to get container status \"929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473\": rpc error: code = NotFound desc = could not find container \"929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473\": container with ID starting with 929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473 not found: ID does not exist" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.366941 4656 scope.go:117] "RemoveContainer" containerID="7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59" Jan 28 15:42:03 crc kubenswrapper[4656]: E0128 15:42:03.367194 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59\": container with ID starting with 7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59 not found: ID does not exist" containerID="7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.367220 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59"} err="failed to get container status \"7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59\": rpc error: code = NotFound desc = could not find container \"7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59\": container with ID starting with 7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59 not found: ID does not exist" Jan 28 15:42:05 crc kubenswrapper[4656]: I0128 15:42:05.180012 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" path="/var/lib/kubelet/pods/64a04b9d-2d5a-4095-9292-c7d5de74f369/volumes" Jan 28 15:42:11 crc kubenswrapper[4656]: I0128 
15:42:11.281070 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:42:11 crc kubenswrapper[4656]: I0128 15:42:11.281687 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:42:11 crc kubenswrapper[4656]: I0128 15:42:11.281756 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:42:11 crc kubenswrapper[4656]: I0128 15:42:11.282368 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42409758b852b659a7b9211c23462e509a808c14e072325970ea9f330c308f98"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:42:11 crc kubenswrapper[4656]: I0128 15:42:11.282445 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://42409758b852b659a7b9211c23462e509a808c14e072325970ea9f330c308f98" gracePeriod=600 Jan 28 15:42:12 crc kubenswrapper[4656]: I0128 15:42:12.342334 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="42409758b852b659a7b9211c23462e509a808c14e072325970ea9f330c308f98" exitCode=0 Jan 28 
15:42:12 crc kubenswrapper[4656]: I0128 15:42:12.342399 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"42409758b852b659a7b9211c23462e509a808c14e072325970ea9f330c308f98"} Jan 28 15:42:12 crc kubenswrapper[4656]: I0128 15:42:12.342623 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9"} Jan 28 15:42:12 crc kubenswrapper[4656]: I0128 15:42:12.342644 4656 scope.go:117] "RemoveContainer" containerID="e4705c729984fa104745366d57583e3ee80c3a326cc35a32720920b368391441" Jan 28 15:43:32 crc kubenswrapper[4656]: I0128 15:43:32.228608 4656 scope.go:117] "RemoveContainer" containerID="58b49ff7d74020e7af2d978a9b51add46734f221caa85a5d1256bb5216b6bba6" Jan 28 15:43:32 crc kubenswrapper[4656]: I0128 15:43:32.263406 4656 scope.go:117] "RemoveContainer" containerID="3f3fb240badabe1f5e9c9d62c574ea57eeefcfffb2c7d9f13df634359844b112" Jan 28 15:43:32 crc kubenswrapper[4656]: I0128 15:43:32.294141 4656 scope.go:117] "RemoveContainer" containerID="0c99690cd99ff76bd306e4eb9caf6e9e98dd2c44cd01b18972ac6c91a1b608d2" Jan 28 15:43:32 crc kubenswrapper[4656]: I0128 15:43:32.331456 4656 scope.go:117] "RemoveContainer" containerID="88b915705c2ba5912b5cc50a01d396aa718cc235e8432ca2223a91e7cf085d37" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.619432 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b87zg"] Jan 28 15:43:34 crc kubenswrapper[4656]: E0128 15:43:34.620128 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="extract-utilities" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 
15:43:34.620155 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="extract-utilities" Jan 28 15:43:34 crc kubenswrapper[4656]: E0128 15:43:34.620204 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="registry-server" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.620211 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="registry-server" Jan 28 15:43:34 crc kubenswrapper[4656]: E0128 15:43:34.620228 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="extract-content" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.620235 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="extract-content" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.620405 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="registry-server" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.621675 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.736762 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b87zg"] Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.825235 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-utilities\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.825312 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krdl9\" (UniqueName: \"kubernetes.io/projected/8e9ee302-a975-4344-9d7e-0f1ee6594a55-kube-api-access-krdl9\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.825993 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-catalog-content\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.973185 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-catalog-content\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.973262 4656 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-utilities\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.973310 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krdl9\" (UniqueName: \"kubernetes.io/projected/8e9ee302-a975-4344-9d7e-0f1ee6594a55-kube-api-access-krdl9\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.973906 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-catalog-content\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.974214 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-utilities\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:35 crc kubenswrapper[4656]: I0128 15:43:35.028201 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krdl9\" (UniqueName: \"kubernetes.io/projected/8e9ee302-a975-4344-9d7e-0f1ee6594a55-kube-api-access-krdl9\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:35 crc kubenswrapper[4656]: I0128 15:43:35.034617 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:35 crc kubenswrapper[4656]: I0128 15:43:35.828703 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b87zg"] Jan 28 15:43:36 crc kubenswrapper[4656]: I0128 15:43:36.152320 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87zg" event={"ID":"8e9ee302-a975-4344-9d7e-0f1ee6594a55","Type":"ContainerStarted","Data":"2703433bec4599503b855edbd1fb46c81b97219681e70741b08613e845a37391"} Jan 28 15:43:37 crc kubenswrapper[4656]: I0128 15:43:37.162036 4656 generic.go:334] "Generic (PLEG): container finished" podID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerID="f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7" exitCode=0 Jan 28 15:43:37 crc kubenswrapper[4656]: I0128 15:43:37.162082 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87zg" event={"ID":"8e9ee302-a975-4344-9d7e-0f1ee6594a55","Type":"ContainerDied","Data":"f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7"} Jan 28 15:43:39 crc kubenswrapper[4656]: I0128 15:43:39.193467 4656 generic.go:334] "Generic (PLEG): container finished" podID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerID="8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013" exitCode=0 Jan 28 15:43:39 crc kubenswrapper[4656]: I0128 15:43:39.193855 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87zg" event={"ID":"8e9ee302-a975-4344-9d7e-0f1ee6594a55","Type":"ContainerDied","Data":"8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013"} Jan 28 15:43:41 crc kubenswrapper[4656]: I0128 15:43:41.218450 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87zg" 
event={"ID":"8e9ee302-a975-4344-9d7e-0f1ee6594a55","Type":"ContainerStarted","Data":"ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270"} Jan 28 15:43:42 crc kubenswrapper[4656]: I0128 15:43:42.261585 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b87zg" podStartSLOduration=4.604751798 podStartE2EDuration="8.261553463s" podCreationTimestamp="2026-01-28 15:43:34 +0000 UTC" firstStartedPulling="2026-01-28 15:43:37.164638667 +0000 UTC m=+1507.672809461" lastFinishedPulling="2026-01-28 15:43:40.821440322 +0000 UTC m=+1511.329611126" observedRunningTime="2026-01-28 15:43:42.253325168 +0000 UTC m=+1512.761496002" watchObservedRunningTime="2026-01-28 15:43:42.261553463 +0000 UTC m=+1512.769724267" Jan 28 15:43:45 crc kubenswrapper[4656]: I0128 15:43:45.035798 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:45 crc kubenswrapper[4656]: I0128 15:43:45.036152 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:45 crc kubenswrapper[4656]: I0128 15:43:45.083869 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.087788 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.140615 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b87zg"] Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.328835 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b87zg" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="registry-server" 
containerID="cri-o://ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270" gracePeriod=2 Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.799801 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.985980 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-utilities\") pod \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.986308 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krdl9\" (UniqueName: \"kubernetes.io/projected/8e9ee302-a975-4344-9d7e-0f1ee6594a55-kube-api-access-krdl9\") pod \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.986371 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-catalog-content\") pod \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.986871 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-utilities" (OuterVolumeSpecName: "utilities") pod "8e9ee302-a975-4344-9d7e-0f1ee6594a55" (UID: "8e9ee302-a975-4344-9d7e-0f1ee6594a55"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.995832 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9ee302-a975-4344-9d7e-0f1ee6594a55-kube-api-access-krdl9" (OuterVolumeSpecName: "kube-api-access-krdl9") pod "8e9ee302-a975-4344-9d7e-0f1ee6594a55" (UID: "8e9ee302-a975-4344-9d7e-0f1ee6594a55"). InnerVolumeSpecName "kube-api-access-krdl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.045442 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e9ee302-a975-4344-9d7e-0f1ee6594a55" (UID: "8e9ee302-a975-4344-9d7e-0f1ee6594a55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.087819 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.087858 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krdl9\" (UniqueName: \"kubernetes.io/projected/8e9ee302-a975-4344-9d7e-0f1ee6594a55-kube-api-access-krdl9\") on node \"crc\" DevicePath \"\"" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.087868 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.340320 4656 generic.go:334] "Generic (PLEG): container finished" podID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" 
containerID="ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270" exitCode=0 Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.340418 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.340416 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87zg" event={"ID":"8e9ee302-a975-4344-9d7e-0f1ee6594a55","Type":"ContainerDied","Data":"ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270"} Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.341241 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87zg" event={"ID":"8e9ee302-a975-4344-9d7e-0f1ee6594a55","Type":"ContainerDied","Data":"2703433bec4599503b855edbd1fb46c81b97219681e70741b08613e845a37391"} Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.341291 4656 scope.go:117] "RemoveContainer" containerID="ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.369847 4656 scope.go:117] "RemoveContainer" containerID="8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.406488 4656 scope.go:117] "RemoveContainer" containerID="f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.411315 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b87zg"] Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.420482 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b87zg"] Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.429151 4656 scope.go:117] "RemoveContainer" containerID="ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270" Jan 28 
15:43:56 crc kubenswrapper[4656]: E0128 15:43:56.430184 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270\": container with ID starting with ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270 not found: ID does not exist" containerID="ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.430240 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270"} err="failed to get container status \"ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270\": rpc error: code = NotFound desc = could not find container \"ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270\": container with ID starting with ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270 not found: ID does not exist" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.430263 4656 scope.go:117] "RemoveContainer" containerID="8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013" Jan 28 15:43:56 crc kubenswrapper[4656]: E0128 15:43:56.430626 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013\": container with ID starting with 8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013 not found: ID does not exist" containerID="8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.430669 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013"} err="failed to get container status 
\"8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013\": rpc error: code = NotFound desc = could not find container \"8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013\": container with ID starting with 8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013 not found: ID does not exist" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.430700 4656 scope.go:117] "RemoveContainer" containerID="f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7" Jan 28 15:43:56 crc kubenswrapper[4656]: E0128 15:43:56.431011 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7\": container with ID starting with f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7 not found: ID does not exist" containerID="f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.431035 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7"} err="failed to get container status \"f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7\": rpc error: code = NotFound desc = could not find container \"f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7\": container with ID starting with f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7 not found: ID does not exist" Jan 28 15:43:57 crc kubenswrapper[4656]: I0128 15:43:57.180105 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" path="/var/lib/kubelet/pods/8e9ee302-a975-4344-9d7e-0f1ee6594a55/volumes" Jan 28 15:44:11 crc kubenswrapper[4656]: I0128 15:44:11.264116 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:44:11 crc kubenswrapper[4656]: I0128 15:44:11.264720 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.356181 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dwqxn"] Jan 28 15:44:23 crc kubenswrapper[4656]: E0128 15:44:23.357066 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="extract-utilities" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.357088 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="extract-utilities" Jan 28 15:44:23 crc kubenswrapper[4656]: E0128 15:44:23.357115 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="registry-server" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.357121 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="registry-server" Jan 28 15:44:23 crc kubenswrapper[4656]: E0128 15:44:23.357138 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="extract-content" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.357144 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="extract-content" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 
15:44:23.357325 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="registry-server" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.358484 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.377943 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dwqxn"] Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.491686 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm5k7\" (UniqueName: \"kubernetes.io/projected/14c94db8-ad16-4f24-aa39-c2d5248d4961-kube-api-access-wm5k7\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.491750 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-utilities\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.491822 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-catalog-content\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.593462 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm5k7\" (UniqueName: 
\"kubernetes.io/projected/14c94db8-ad16-4f24-aa39-c2d5248d4961-kube-api-access-wm5k7\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.593530 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-utilities\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.593623 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-catalog-content\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.595247 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-utilities\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.595546 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-catalog-content\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.627774 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm5k7\" (UniqueName: 
\"kubernetes.io/projected/14c94db8-ad16-4f24-aa39-c2d5248d4961-kube-api-access-wm5k7\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.679918 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:24 crc kubenswrapper[4656]: I0128 15:44:24.013159 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dwqxn"] Jan 28 15:44:24 crc kubenswrapper[4656]: I0128 15:44:24.708530 4656 generic.go:334] "Generic (PLEG): container finished" podID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerID="97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070" exitCode=0 Jan 28 15:44:24 crc kubenswrapper[4656]: I0128 15:44:24.708585 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwqxn" event={"ID":"14c94db8-ad16-4f24-aa39-c2d5248d4961","Type":"ContainerDied","Data":"97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070"} Jan 28 15:44:24 crc kubenswrapper[4656]: I0128 15:44:24.708622 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwqxn" event={"ID":"14c94db8-ad16-4f24-aa39-c2d5248d4961","Type":"ContainerStarted","Data":"bfc76d00ee3c2f24f1fa3ce1dd435cc123390dfbf2cea11eefc282f8669895aa"} Jan 28 15:44:26 crc kubenswrapper[4656]: I0128 15:44:26.724579 4656 generic.go:334] "Generic (PLEG): container finished" podID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerID="0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1" exitCode=0 Jan 28 15:44:26 crc kubenswrapper[4656]: I0128 15:44:26.725140 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwqxn" 
event={"ID":"14c94db8-ad16-4f24-aa39-c2d5248d4961","Type":"ContainerDied","Data":"0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1"} Jan 28 15:44:27 crc kubenswrapper[4656]: I0128 15:44:27.740256 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwqxn" event={"ID":"14c94db8-ad16-4f24-aa39-c2d5248d4961","Type":"ContainerStarted","Data":"3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a"} Jan 28 15:44:27 crc kubenswrapper[4656]: I0128 15:44:27.782586 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dwqxn" podStartSLOduration=2.40037058 podStartE2EDuration="4.782555509s" podCreationTimestamp="2026-01-28 15:44:23 +0000 UTC" firstStartedPulling="2026-01-28 15:44:24.710998225 +0000 UTC m=+1555.219169029" lastFinishedPulling="2026-01-28 15:44:27.093183154 +0000 UTC m=+1557.601353958" observedRunningTime="2026-01-28 15:44:27.771791181 +0000 UTC m=+1558.279961985" watchObservedRunningTime="2026-01-28 15:44:27.782555509 +0000 UTC m=+1558.290726313" Jan 28 15:44:32 crc kubenswrapper[4656]: I0128 15:44:32.412706 4656 scope.go:117] "RemoveContainer" containerID="83b94c30ed93547fd506be79bae97518f5ab107ea9f58e554bb480d642b3aaf6" Jan 28 15:44:32 crc kubenswrapper[4656]: I0128 15:44:32.440306 4656 scope.go:117] "RemoveContainer" containerID="bdbd8b9752b666c133af6189feff3118817c7063c0dd053bb54b7a6f4c3b19d7" Jan 28 15:44:33 crc kubenswrapper[4656]: I0128 15:44:33.681021 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:33 crc kubenswrapper[4656]: I0128 15:44:33.681092 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:33 crc kubenswrapper[4656]: I0128 15:44:33.726319 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:33 crc kubenswrapper[4656]: I0128 15:44:33.843521 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:33 crc kubenswrapper[4656]: I0128 15:44:33.967012 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dwqxn"] Jan 28 15:44:35 crc kubenswrapper[4656]: I0128 15:44:35.816428 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dwqxn" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="registry-server" containerID="cri-o://3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a" gracePeriod=2 Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.324063 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.482892 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm5k7\" (UniqueName: \"kubernetes.io/projected/14c94db8-ad16-4f24-aa39-c2d5248d4961-kube-api-access-wm5k7\") pod \"14c94db8-ad16-4f24-aa39-c2d5248d4961\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.483036 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-utilities\") pod \"14c94db8-ad16-4f24-aa39-c2d5248d4961\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.483254 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-catalog-content\") pod 
\"14c94db8-ad16-4f24-aa39-c2d5248d4961\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.484749 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-utilities" (OuterVolumeSpecName: "utilities") pod "14c94db8-ad16-4f24-aa39-c2d5248d4961" (UID: "14c94db8-ad16-4f24-aa39-c2d5248d4961"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.489267 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c94db8-ad16-4f24-aa39-c2d5248d4961-kube-api-access-wm5k7" (OuterVolumeSpecName: "kube-api-access-wm5k7") pod "14c94db8-ad16-4f24-aa39-c2d5248d4961" (UID: "14c94db8-ad16-4f24-aa39-c2d5248d4961"). InnerVolumeSpecName "kube-api-access-wm5k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.516220 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14c94db8-ad16-4f24-aa39-c2d5248d4961" (UID: "14c94db8-ad16-4f24-aa39-c2d5248d4961"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.585274 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.585470 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.585567 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm5k7\" (UniqueName: \"kubernetes.io/projected/14c94db8-ad16-4f24-aa39-c2d5248d4961-kube-api-access-wm5k7\") on node \"crc\" DevicePath \"\""
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.878779 4656 generic.go:334] "Generic (PLEG): container finished" podID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerID="3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a" exitCode=0
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.879968 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwqxn" event={"ID":"14c94db8-ad16-4f24-aa39-c2d5248d4961","Type":"ContainerDied","Data":"3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a"}
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.880080 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwqxn" event={"ID":"14c94db8-ad16-4f24-aa39-c2d5248d4961","Type":"ContainerDied","Data":"bfc76d00ee3c2f24f1fa3ce1dd435cc123390dfbf2cea11eefc282f8669895aa"}
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.880208 4656 scope.go:117] "RemoveContainer" containerID="3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a"
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.880485 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dwqxn"
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.926352 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dwqxn"]
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.931232 4656 scope.go:117] "RemoveContainer" containerID="0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1"
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.935943 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dwqxn"]
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.954365 4656 scope.go:117] "RemoveContainer" containerID="97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070"
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.991571 4656 scope.go:117] "RemoveContainer" containerID="3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a"
Jan 28 15:44:36 crc kubenswrapper[4656]: E0128 15:44:36.992136 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a\": container with ID starting with 3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a not found: ID does not exist" containerID="3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a"
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.992349 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a"} err="failed to get container status \"3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a\": rpc error: code = NotFound desc = could not find container \"3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a\": container with ID starting with 3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a not found: ID does not exist"
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.992395 4656 scope.go:117] "RemoveContainer" containerID="0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1"
Jan 28 15:44:36 crc kubenswrapper[4656]: E0128 15:44:36.992679 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1\": container with ID starting with 0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1 not found: ID does not exist" containerID="0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1"
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.992718 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1"} err="failed to get container status \"0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1\": rpc error: code = NotFound desc = could not find container \"0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1\": container with ID starting with 0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1 not found: ID does not exist"
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.992749 4656 scope.go:117] "RemoveContainer" containerID="97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070"
Jan 28 15:44:36 crc kubenswrapper[4656]: E0128 15:44:36.993014 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070\": container with ID starting with 97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070 not found: ID does not exist" containerID="97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070"
Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.993039 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070"} err="failed to get container status \"97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070\": rpc error: code = NotFound desc = could not find container \"97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070\": container with ID starting with 97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070 not found: ID does not exist"
Jan 28 15:44:37 crc kubenswrapper[4656]: I0128 15:44:37.200448 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" path="/var/lib/kubelet/pods/14c94db8-ad16-4f24-aa39-c2d5248d4961/volumes"
Jan 28 15:44:41 crc kubenswrapper[4656]: I0128 15:44:41.264336 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:44:41 crc kubenswrapper[4656]: I0128 15:44:41.264906 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.174132 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"]
Jan 28 15:45:00 crc kubenswrapper[4656]: E0128 15:45:00.175137 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="extract-utilities"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.175173 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="extract-utilities"
Jan 28 15:45:00 crc kubenswrapper[4656]: E0128 15:45:00.175210 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="extract-content"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.175217 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="extract-content"
Jan 28 15:45:00 crc kubenswrapper[4656]: E0128 15:45:00.175232 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="registry-server"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.175239 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="registry-server"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.175464 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="registry-server"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.176996 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.179719 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.180023 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.193420 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"]
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.363063 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsqg5\" (UniqueName: \"kubernetes.io/projected/bd321e99-8c7b-4ecc-8810-69978ab0d329-kube-api-access-bsqg5\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.363413 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd321e99-8c7b-4ecc-8810-69978ab0d329-secret-volume\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.363610 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd321e99-8c7b-4ecc-8810-69978ab0d329-config-volume\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.464676 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd321e99-8c7b-4ecc-8810-69978ab0d329-config-volume\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.464792 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsqg5\" (UniqueName: \"kubernetes.io/projected/bd321e99-8c7b-4ecc-8810-69978ab0d329-kube-api-access-bsqg5\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.464851 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd321e99-8c7b-4ecc-8810-69978ab0d329-secret-volume\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.465769 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd321e99-8c7b-4ecc-8810-69978ab0d329-config-volume\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.475360 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd321e99-8c7b-4ecc-8810-69978ab0d329-secret-volume\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.487494 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsqg5\" (UniqueName: \"kubernetes.io/projected/bd321e99-8c7b-4ecc-8810-69978ab0d329-kube-api-access-bsqg5\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.498385 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"
Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.948620 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"]
Jan 28 15:45:01 crc kubenswrapper[4656]: I0128 15:45:01.098345 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" event={"ID":"bd321e99-8c7b-4ecc-8810-69978ab0d329","Type":"ContainerStarted","Data":"3d5a796e4b4b584a6d6e61b520c24a54b5dcfc28565e15bbf3f46659f5dcdd0a"}
Jan 28 15:45:02 crc kubenswrapper[4656]: I0128 15:45:02.106793 4656 generic.go:334] "Generic (PLEG): container finished" podID="bd321e99-8c7b-4ecc-8810-69978ab0d329" containerID="5f9b0baaeb255e44970614227fb5114a7cc5769a0081687591976fc80050f1ac" exitCode=0
Jan 28 15:45:02 crc kubenswrapper[4656]: I0128 15:45:02.107113 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" event={"ID":"bd321e99-8c7b-4ecc-8810-69978ab0d329","Type":"ContainerDied","Data":"5f9b0baaeb255e44970614227fb5114a7cc5769a0081687591976fc80050f1ac"}
Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.409535 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"
Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.512272 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd321e99-8c7b-4ecc-8810-69978ab0d329-config-volume\") pod \"bd321e99-8c7b-4ecc-8810-69978ab0d329\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") "
Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.512389 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsqg5\" (UniqueName: \"kubernetes.io/projected/bd321e99-8c7b-4ecc-8810-69978ab0d329-kube-api-access-bsqg5\") pod \"bd321e99-8c7b-4ecc-8810-69978ab0d329\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") "
Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.512429 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd321e99-8c7b-4ecc-8810-69978ab0d329-secret-volume\") pod \"bd321e99-8c7b-4ecc-8810-69978ab0d329\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") "
Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.513316 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd321e99-8c7b-4ecc-8810-69978ab0d329-config-volume" (OuterVolumeSpecName: "config-volume") pod "bd321e99-8c7b-4ecc-8810-69978ab0d329" (UID: "bd321e99-8c7b-4ecc-8810-69978ab0d329"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.522543 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd321e99-8c7b-4ecc-8810-69978ab0d329-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bd321e99-8c7b-4ecc-8810-69978ab0d329" (UID: "bd321e99-8c7b-4ecc-8810-69978ab0d329"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.523585 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd321e99-8c7b-4ecc-8810-69978ab0d329-kube-api-access-bsqg5" (OuterVolumeSpecName: "kube-api-access-bsqg5") pod "bd321e99-8c7b-4ecc-8810-69978ab0d329" (UID: "bd321e99-8c7b-4ecc-8810-69978ab0d329"). InnerVolumeSpecName "kube-api-access-bsqg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.614083 4656 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd321e99-8c7b-4ecc-8810-69978ab0d329-config-volume\") on node \"crc\" DevicePath \"\""
Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.614111 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsqg5\" (UniqueName: \"kubernetes.io/projected/bd321e99-8c7b-4ecc-8810-69978ab0d329-kube-api-access-bsqg5\") on node \"crc\" DevicePath \"\""
Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.614124 4656 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd321e99-8c7b-4ecc-8810-69978ab0d329-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 28 15:45:04 crc kubenswrapper[4656]: I0128 15:45:04.123837 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" event={"ID":"bd321e99-8c7b-4ecc-8810-69978ab0d329","Type":"ContainerDied","Data":"3d5a796e4b4b584a6d6e61b520c24a54b5dcfc28565e15bbf3f46659f5dcdd0a"}
Jan 28 15:45:04 crc kubenswrapper[4656]: I0128 15:45:04.123875 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d5a796e4b4b584a6d6e61b520c24a54b5dcfc28565e15bbf3f46659f5dcdd0a"
Jan 28 15:45:04 crc kubenswrapper[4656]: I0128 15:45:04.123916 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.263704 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.265044 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.265177 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.265906 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.266040 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" gracePeriod=600
Jan 28 15:45:11 crc kubenswrapper[4656]: E0128 15:45:11.401980 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.490539 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8k6pt"]
Jan 28 15:45:11 crc kubenswrapper[4656]: E0128 15:45:11.490907 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd321e99-8c7b-4ecc-8810-69978ab0d329" containerName="collect-profiles"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.490920 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd321e99-8c7b-4ecc-8810-69978ab0d329" containerName="collect-profiles"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.491093 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd321e99-8c7b-4ecc-8810-69978ab0d329" containerName="collect-profiles"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.492215 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.510733 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8k6pt"]
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.642937 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nszss\" (UniqueName: \"kubernetes.io/projected/f9815183-b48b-4107-a4d5-91d208bc8850-kube-api-access-nszss\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.643664 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-utilities\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.643842 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-catalog-content\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.745123 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nszss\" (UniqueName: \"kubernetes.io/projected/f9815183-b48b-4107-a4d5-91d208bc8850-kube-api-access-nszss\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.745188 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-utilities\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.745242 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-catalog-content\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.745804 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-catalog-content\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.745990 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-utilities\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.772904 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nszss\" (UniqueName: \"kubernetes.io/projected/f9815183-b48b-4107-a4d5-91d208bc8850-kube-api-access-nszss\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.810814 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:12 crc kubenswrapper[4656]: I0128 15:45:12.191598 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" exitCode=0
Jan 28 15:45:12 crc kubenswrapper[4656]: I0128 15:45:12.191994 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9"}
Jan 28 15:45:12 crc kubenswrapper[4656]: I0128 15:45:12.192078 4656 scope.go:117] "RemoveContainer" containerID="42409758b852b659a7b9211c23462e509a808c14e072325970ea9f330c308f98"
Jan 28 15:45:12 crc kubenswrapper[4656]: I0128 15:45:12.192964 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9"
Jan 28 15:45:12 crc kubenswrapper[4656]: E0128 15:45:12.193659 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:45:12 crc kubenswrapper[4656]: I0128 15:45:12.390440 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8k6pt"]
Jan 28 15:45:13 crc kubenswrapper[4656]: I0128 15:45:13.204302 4656 generic.go:334] "Generic (PLEG): container finished" podID="f9815183-b48b-4107-a4d5-91d208bc8850" containerID="229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646" exitCode=0
Jan 28 15:45:13 crc kubenswrapper[4656]: I0128 15:45:13.204356 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8k6pt" event={"ID":"f9815183-b48b-4107-a4d5-91d208bc8850","Type":"ContainerDied","Data":"229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646"}
Jan 28 15:45:13 crc kubenswrapper[4656]: I0128 15:45:13.204383 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8k6pt" event={"ID":"f9815183-b48b-4107-a4d5-91d208bc8850","Type":"ContainerStarted","Data":"bee71b8d16eeb832a63b5c0c256bdb5f880a13bf3f1a0762732f27116d5923dd"}
Jan 28 15:45:14 crc kubenswrapper[4656]: I0128 15:45:14.215502 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8k6pt" event={"ID":"f9815183-b48b-4107-a4d5-91d208bc8850","Type":"ContainerStarted","Data":"fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98"}
Jan 28 15:45:15 crc kubenswrapper[4656]: I0128 15:45:15.224607 4656 generic.go:334] "Generic (PLEG): container finished" podID="f9815183-b48b-4107-a4d5-91d208bc8850" containerID="fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98" exitCode=0
Jan 28 15:45:15 crc kubenswrapper[4656]: I0128 15:45:15.224676 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8k6pt" event={"ID":"f9815183-b48b-4107-a4d5-91d208bc8850","Type":"ContainerDied","Data":"fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98"}
Jan 28 15:45:16 crc kubenswrapper[4656]: I0128 15:45:16.237824 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8k6pt" event={"ID":"f9815183-b48b-4107-a4d5-91d208bc8850","Type":"ContainerStarted","Data":"e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0"}
Jan 28 15:45:16 crc kubenswrapper[4656]: I0128 15:45:16.263591 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8k6pt" podStartSLOduration=2.80355046 podStartE2EDuration="5.263569881s" podCreationTimestamp="2026-01-28 15:45:11 +0000 UTC" firstStartedPulling="2026-01-28 15:45:13.206068229 +0000 UTC m=+1603.714239043" lastFinishedPulling="2026-01-28 15:45:15.66608767 +0000 UTC m=+1606.174258464" observedRunningTime="2026-01-28 15:45:16.259524715 +0000 UTC m=+1606.767695519" watchObservedRunningTime="2026-01-28 15:45:16.263569881 +0000 UTC m=+1606.771740685"
Jan 28 15:45:21 crc kubenswrapper[4656]: I0128 15:45:21.811271 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:21 crc kubenswrapper[4656]: I0128 15:45:21.811868 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:21 crc kubenswrapper[4656]: I0128 15:45:21.861287 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:22 crc kubenswrapper[4656]: I0128 15:45:22.393682 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:22 crc kubenswrapper[4656]: I0128 15:45:22.445865 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8k6pt"]
Jan 28 15:45:24 crc kubenswrapper[4656]: I0128 15:45:24.178381 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9"
Jan 28 15:45:24 crc kubenswrapper[4656]: E0128 15:45:24.179018 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:45:24 crc kubenswrapper[4656]: I0128 15:45:24.364259 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8k6pt" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="registry-server" containerID="cri-o://e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0" gracePeriod=2
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.024217 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.208132 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-catalog-content\") pod \"f9815183-b48b-4107-a4d5-91d208bc8850\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") "
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.209301 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-utilities\") pod \"f9815183-b48b-4107-a4d5-91d208bc8850\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") "
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.209443 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nszss\" (UniqueName: \"kubernetes.io/projected/f9815183-b48b-4107-a4d5-91d208bc8850-kube-api-access-nszss\") pod \"f9815183-b48b-4107-a4d5-91d208bc8850\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") "
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.210130 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-utilities" (OuterVolumeSpecName: "utilities") pod "f9815183-b48b-4107-a4d5-91d208bc8850" (UID: "f9815183-b48b-4107-a4d5-91d208bc8850"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.217521 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9815183-b48b-4107-a4d5-91d208bc8850-kube-api-access-nszss" (OuterVolumeSpecName: "kube-api-access-nszss") pod "f9815183-b48b-4107-a4d5-91d208bc8850" (UID: "f9815183-b48b-4107-a4d5-91d208bc8850"). InnerVolumeSpecName "kube-api-access-nszss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.289345 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9815183-b48b-4107-a4d5-91d208bc8850" (UID: "f9815183-b48b-4107-a4d5-91d208bc8850"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.312676 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.312731 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.312744 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nszss\" (UniqueName: \"kubernetes.io/projected/f9815183-b48b-4107-a4d5-91d208bc8850-kube-api-access-nszss\") on node \"crc\" DevicePath \"\""
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.373065 4656 generic.go:334] "Generic (PLEG): container finished" podID="f9815183-b48b-4107-a4d5-91d208bc8850" containerID="e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0" exitCode=0
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.373124 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8k6pt" event={"ID":"f9815183-b48b-4107-a4d5-91d208bc8850","Type":"ContainerDied","Data":"e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0"}
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.373196 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8k6pt" event={"ID":"f9815183-b48b-4107-a4d5-91d208bc8850","Type":"ContainerDied","Data":"bee71b8d16eeb832a63b5c0c256bdb5f880a13bf3f1a0762732f27116d5923dd"}
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.373225 4656 scope.go:117] "RemoveContainer" containerID="e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0"
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.373419 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.411397 4656 scope.go:117] "RemoveContainer" containerID="fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98"
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.423263 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8k6pt"]
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.431797 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8k6pt"]
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.436757 4656 scope.go:117] "RemoveContainer" containerID="229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646"
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.467835 4656 scope.go:117] "RemoveContainer" containerID="e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0"
Jan 28 15:45:25 crc kubenswrapper[4656]: E0128 15:45:25.468510 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0\": container with ID starting with e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0 not found: ID does not exist" containerID="e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0"
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.468584 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0"} err="failed to get container status \"e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0\": rpc error: code = NotFound desc = could not find container \"e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0\": container with ID starting with e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0 not found: ID does not exist"
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.468631 4656 scope.go:117] "RemoveContainer" containerID="fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98"
Jan 28 15:45:25 crc kubenswrapper[4656]: E0128 15:45:25.469145 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98\": container with ID starting with fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98 not found: ID does not exist" containerID="fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98"
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.469216 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98"} err="failed to get container status \"fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98\": rpc error: code = NotFound desc = could not find container \"fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98\": container with ID starting with fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98 not found: ID does not exist"
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.469241 4656 scope.go:117] "RemoveContainer" containerID="229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646"
Jan 28 15:45:25 crc kubenswrapper[4656]: E0128 15:45:25.469578 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646\": container with ID starting with 229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646 not found: ID does not exist" containerID="229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646"
Jan 28 15:45:25 crc
kubenswrapper[4656]: I0128 15:45:25.469609 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646"} err="failed to get container status \"229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646\": rpc error: code = NotFound desc = could not find container \"229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646\": container with ID starting with 229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646 not found: ID does not exist" Jan 28 15:45:27 crc kubenswrapper[4656]: I0128 15:45:27.180862 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" path="/var/lib/kubelet/pods/f9815183-b48b-4107-a4d5-91d208bc8850/volumes" Jan 28 15:45:39 crc kubenswrapper[4656]: I0128 15:45:39.171098 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:45:39 crc kubenswrapper[4656]: E0128 15:45:39.171978 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:45:52 crc kubenswrapper[4656]: I0128 15:45:52.171411 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:45:52 crc kubenswrapper[4656]: E0128 15:45:52.172244 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:46:06 crc kubenswrapper[4656]: I0128 15:46:06.170909 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:46:06 crc kubenswrapper[4656]: E0128 15:46:06.171762 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:46:19 crc kubenswrapper[4656]: I0128 15:46:19.171008 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:46:19 crc kubenswrapper[4656]: E0128 15:46:19.172060 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:46:30 crc kubenswrapper[4656]: I0128 15:46:30.171684 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:46:30 crc kubenswrapper[4656]: E0128 15:46:30.172635 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:46:41 crc kubenswrapper[4656]: I0128 15:46:41.176646 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:46:41 crc kubenswrapper[4656]: E0128 15:46:41.177557 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:46:55 crc kubenswrapper[4656]: I0128 15:46:55.170617 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:46:55 crc kubenswrapper[4656]: E0128 15:46:55.171562 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:47:07 crc kubenswrapper[4656]: I0128 15:47:07.170671 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:47:07 crc kubenswrapper[4656]: E0128 15:47:07.171473 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:47:20 crc kubenswrapper[4656]: I0128 15:47:20.171495 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:47:20 crc kubenswrapper[4656]: E0128 15:47:20.172329 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:47:34 crc kubenswrapper[4656]: I0128 15:47:34.171858 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:47:34 crc kubenswrapper[4656]: E0128 15:47:34.174510 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:47:46 crc kubenswrapper[4656]: I0128 15:47:46.171877 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:47:46 crc kubenswrapper[4656]: E0128 15:47:46.172756 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.070915 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2q82c"] Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.083856 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-b64sc"] Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.095562 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2q82c"] Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.102500 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-91ca-account-create-update-lzb9w"] Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.109334 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-b64sc"] Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.115479 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-91ca-account-create-update-lzb9w"] Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.181526 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32493d3d-ca02-451a-b1b0-51d4f82d54f3" path="/var/lib/kubelet/pods/32493d3d-ca02-451a-b1b0-51d4f82d54f3/volumes" Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.183008 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e7d30a-cf9c-4aa5-880f-e214b8694082" path="/var/lib/kubelet/pods/69e7d30a-cf9c-4aa5-880f-e214b8694082/volumes" Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.183961 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4037dd9-6fe0-4f3a-9fca-4e716126a317" 
path="/var/lib/kubelet/pods/e4037dd9-6fe0-4f3a-9fca-4e716126a317/volumes" Jan 28 15:47:50 crc kubenswrapper[4656]: I0128 15:47:50.042575 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3d7e-account-create-update-6v56m"] Jan 28 15:47:50 crc kubenswrapper[4656]: I0128 15:47:50.050002 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-gnzml"] Jan 28 15:47:50 crc kubenswrapper[4656]: I0128 15:47:50.056544 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-93fe-account-create-update-r2snq"] Jan 28 15:47:50 crc kubenswrapper[4656]: I0128 15:47:50.064450 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3d7e-account-create-update-6v56m"] Jan 28 15:47:50 crc kubenswrapper[4656]: I0128 15:47:50.070871 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-gnzml"] Jan 28 15:47:50 crc kubenswrapper[4656]: I0128 15:47:50.076764 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-93fe-account-create-update-r2snq"] Jan 28 15:47:51 crc kubenswrapper[4656]: I0128 15:47:51.180119 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="099dbe89-7289-453c-84a2-f0de86b792cf" path="/var/lib/kubelet/pods/099dbe89-7289-453c-84a2-f0de86b792cf/volumes" Jan 28 15:47:51 crc kubenswrapper[4656]: I0128 15:47:51.181177 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4a6af64-b874-4449-ae51-8902df8e9bdf" path="/var/lib/kubelet/pods/c4a6af64-b874-4449-ae51-8902df8e9bdf/volumes" Jan 28 15:47:51 crc kubenswrapper[4656]: I0128 15:47:51.181751 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfcf1378-d339-426c-bd01-36cd47172c37" path="/var/lib/kubelet/pods/dfcf1378-d339-426c-bd01-36cd47172c37/volumes" Jan 28 15:47:57 crc kubenswrapper[4656]: I0128 15:47:57.029523 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/root-account-create-update-slw4v"] Jan 28 15:47:57 crc kubenswrapper[4656]: I0128 15:47:57.035967 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-slw4v"] Jan 28 15:47:57 crc kubenswrapper[4656]: I0128 15:47:57.171374 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:47:57 crc kubenswrapper[4656]: E0128 15:47:57.171619 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:47:57 crc kubenswrapper[4656]: I0128 15:47:57.180032 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1df812f9-220d-45f7-aa2a-c26196ef62e5" path="/var/lib/kubelet/pods/1df812f9-220d-45f7-aa2a-c26196ef62e5/volumes" Jan 28 15:48:10 crc kubenswrapper[4656]: I0128 15:48:10.171330 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:48:10 crc kubenswrapper[4656]: E0128 15:48:10.172181 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:48:19 crc kubenswrapper[4656]: I0128 15:48:19.054952 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4dvx6"] Jan 28 15:48:19 crc 
kubenswrapper[4656]: I0128 15:48:19.060892 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4dvx6"] Jan 28 15:48:19 crc kubenswrapper[4656]: I0128 15:48:19.181095 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aed3d60-8ff4-4b82-9bf8-7892dff01cff" path="/var/lib/kubelet/pods/9aed3d60-8ff4-4b82-9bf8-7892dff01cff/volumes" Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.034885 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3131-account-create-update-6kp5h"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.042493 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-vdvd7"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.049751 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2c7a-account-create-update-s5pbw"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.056736 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a9b7-account-create-update-sf97j"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.065487 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3131-account-create-update-6kp5h"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.075955 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-vdvd7"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.082604 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a9b7-account-create-update-sf97j"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.088689 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2c7a-account-create-update-s5pbw"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.094330 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-cc7jv"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.099659 4656 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-cc7jv"] Jan 28 15:48:21 crc kubenswrapper[4656]: I0128 15:48:21.174283 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:48:21 crc kubenswrapper[4656]: E0128 15:48:21.174824 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:48:21 crc kubenswrapper[4656]: I0128 15:48:21.187702 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dc7809e-fb0b-4e26-ad3b-45aceb483265" path="/var/lib/kubelet/pods/1dc7809e-fb0b-4e26-ad3b-45aceb483265/volumes" Jan 28 15:48:21 crc kubenswrapper[4656]: I0128 15:48:21.189339 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2455ddf8-bd67-4fe1-821e-0feda40d7da9" path="/var/lib/kubelet/pods/2455ddf8-bd67-4fe1-821e-0feda40d7da9/volumes" Jan 28 15:48:21 crc kubenswrapper[4656]: I0128 15:48:21.190509 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccf851bf-a272-4c0a-99a1-97c464d23a0d" path="/var/lib/kubelet/pods/ccf851bf-a272-4c0a-99a1-97c464d23a0d/volumes" Jan 28 15:48:21 crc kubenswrapper[4656]: I0128 15:48:21.191588 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf48bf2a-2b8d-41ac-a712-9218091e8352" path="/var/lib/kubelet/pods/cf48bf2a-2b8d-41ac-a712-9218091e8352/volumes" Jan 28 15:48:21 crc kubenswrapper[4656]: I0128 15:48:21.193601 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d3108c-ea49-46b7-896a-6303b5651abc" 
path="/var/lib/kubelet/pods/e4d3108c-ea49-46b7-896a-6303b5651abc/volumes" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.622561 4656 scope.go:117] "RemoveContainer" containerID="00af603897747403f2999c8dd0ea82db99373770b631a4b1c83fa47765ccde4d" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.646946 4656 scope.go:117] "RemoveContainer" containerID="2b87d65405f263ac45d36349abb81b3c7c7ec205bf002789539910deb242d968" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.686565 4656 scope.go:117] "RemoveContainer" containerID="3dde6ae74f513f5fb2d842f4ed02820c2b2f634f4c0cbc22132cce071c91ef58" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.720056 4656 scope.go:117] "RemoveContainer" containerID="ae2e87be8e9439b8c799f033078d2b5429004fda019d053d0cb63bfa319ab598" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.758657 4656 scope.go:117] "RemoveContainer" containerID="266a34fa5d7bf2b9b096e72f7c9b6721bd869c8c77436d0e440426fb23a20543" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.785494 4656 scope.go:117] "RemoveContainer" containerID="8bb388635794bed8d36ed9b59e0ebe5aa23722a91994b61ca6e7ca5038083d1d" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.826449 4656 scope.go:117] "RemoveContainer" containerID="54efb2e0c946cd8bb2a3f18919e54300b18699ccd32fa90c9ddedf9982d9a734" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.847690 4656 scope.go:117] "RemoveContainer" containerID="d0d1b804843eff8d434244a805f92f46f9f0772ead6e44bf1c0d417856331fa2" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.872679 4656 scope.go:117] "RemoveContainer" containerID="02f441349595b2fea556f764337befa680c75f21f42833da737a52fe0e77200f" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.890933 4656 scope.go:117] "RemoveContainer" containerID="2365be7a0a671764daf45f822757fffcf6c88cb4fb34815a2047bee431512215" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.911860 4656 scope.go:117] "RemoveContainer" 
containerID="810dbe4d5afc6a6d7cc6184ae641765eef2d6efff2d1a416b9a80f9cc06da73c" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.930530 4656 scope.go:117] "RemoveContainer" containerID="5f4d77dc94bab7589c74c06da6f776ffa81bcf97ec1f8a2199e44a457c390fb6" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.956155 4656 scope.go:117] "RemoveContainer" containerID="6fe56555d0ff628c998638556e5ceea7a70562af71008d6c468acec1f6303046" Jan 28 15:48:34 crc kubenswrapper[4656]: I0128 15:48:34.034655 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-842wg"] Jan 28 15:48:34 crc kubenswrapper[4656]: I0128 15:48:34.047957 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-842wg"] Jan 28 15:48:35 crc kubenswrapper[4656]: I0128 15:48:35.179271 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" path="/var/lib/kubelet/pods/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c/volumes" Jan 28 15:48:36 crc kubenswrapper[4656]: I0128 15:48:36.170370 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:48:36 crc kubenswrapper[4656]: E0128 15:48:36.170792 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:48:47 crc kubenswrapper[4656]: I0128 15:48:47.170480 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:48:47 crc kubenswrapper[4656]: E0128 15:48:47.171210 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:49:01 crc kubenswrapper[4656]: I0128 15:49:01.174964 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:49:01 crc kubenswrapper[4656]: E0128 15:49:01.175472 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:49:14 crc kubenswrapper[4656]: I0128 15:49:14.172113 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:49:14 crc kubenswrapper[4656]: E0128 15:49:14.173028 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:49:26 crc kubenswrapper[4656]: I0128 15:49:26.171426 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:49:26 crc kubenswrapper[4656]: E0128 15:49:26.172039 4656 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:49:33 crc kubenswrapper[4656]: I0128 15:49:33.152376 4656 scope.go:117] "RemoveContainer" containerID="b5f01d437841162f7d723c8a3461fbed8e9508da364b69783ba5047744e009bb" Jan 28 15:49:37 crc kubenswrapper[4656]: I0128 15:49:37.171311 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:49:37 crc kubenswrapper[4656]: E0128 15:49:37.173036 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:49:50 crc kubenswrapper[4656]: I0128 15:49:50.170827 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:49:50 crc kubenswrapper[4656]: E0128 15:49:50.171486 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:50:04 crc kubenswrapper[4656]: I0128 15:50:04.171438 4656 scope.go:117] 
"RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9"
Jan 28 15:50:04 crc kubenswrapper[4656]: E0128 15:50:04.173945 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:50:19 crc kubenswrapper[4656]: I0128 15:50:19.171766 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9"
Jan 28 15:50:19 crc kubenswrapper[4656]: I0128 15:50:19.982584 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"bb32448722780b8d9f530e9024e281722b1f8d106f54d17c562c9b55376475e0"}
Jan 28 15:52:41 crc kubenswrapper[4656]: I0128 15:52:41.264202 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:52:41 crc kubenswrapper[4656]: I0128 15:52:41.264911 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:53:11 crc kubenswrapper[4656]: I0128 15:53:11.263812 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:53:11 crc kubenswrapper[4656]: I0128 15:53:11.264493 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.263985 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.264680 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.264761 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk"
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.265522 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb32448722780b8d9f530e9024e281722b1f8d106f54d17c562c9b55376475e0"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.265597 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://bb32448722780b8d9f530e9024e281722b1f8d106f54d17c562c9b55376475e0" gracePeriod=600
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.774647 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="bb32448722780b8d9f530e9024e281722b1f8d106f54d17c562c9b55376475e0" exitCode=0
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.774696 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"bb32448722780b8d9f530e9024e281722b1f8d106f54d17c562c9b55376475e0"}
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.775005 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"}
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.775054 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9"
Jan 28 15:53:53 crc kubenswrapper[4656]: I0128 15:53:53.936253 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-twk2d"]
Jan 28 15:53:53 crc kubenswrapper[4656]: E0128 15:53:53.937300 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="extract-utilities"
Jan 28 15:53:53 crc kubenswrapper[4656]: I0128 15:53:53.937326 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="extract-utilities"
Jan 28 15:53:53 crc kubenswrapper[4656]: E0128 15:53:53.937348 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="registry-server"
Jan 28 15:53:53 crc kubenswrapper[4656]: I0128 15:53:53.937357 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="registry-server"
Jan 28 15:53:53 crc kubenswrapper[4656]: E0128 15:53:53.937384 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="extract-content"
Jan 28 15:53:53 crc kubenswrapper[4656]: I0128 15:53:53.937393 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="extract-content"
Jan 28 15:53:53 crc kubenswrapper[4656]: I0128 15:53:53.937618 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="registry-server"
Jan 28 15:53:53 crc kubenswrapper[4656]: I0128 15:53:53.939130 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:53:53 crc kubenswrapper[4656]: I0128 15:53:53.951766 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-twk2d"]
Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.006157 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-catalog-content\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.006243 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-utilities\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.006358 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sjc5\" (UniqueName: \"kubernetes.io/projected/e57c2aaa-e624-4554-a216-411263758595-kube-api-access-7sjc5\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.108101 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sjc5\" (UniqueName: \"kubernetes.io/projected/e57c2aaa-e624-4554-a216-411263758595-kube-api-access-7sjc5\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.108209 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-catalog-content\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.108244 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-utilities\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.108850 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-utilities\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.108847 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-catalog-content\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.129785 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sjc5\" (UniqueName: \"kubernetes.io/projected/e57c2aaa-e624-4554-a216-411263758595-kube-api-access-7sjc5\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.263508 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.896492 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-twk2d"]
Jan 28 15:53:55 crc kubenswrapper[4656]: I0128 15:53:55.885648 4656 generic.go:334] "Generic (PLEG): container finished" podID="e57c2aaa-e624-4554-a216-411263758595" containerID="90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878" exitCode=0
Jan 28 15:53:55 crc kubenswrapper[4656]: I0128 15:53:55.886048 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twk2d" event={"ID":"e57c2aaa-e624-4554-a216-411263758595","Type":"ContainerDied","Data":"90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878"}
Jan 28 15:53:55 crc kubenswrapper[4656]: I0128 15:53:55.886102 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twk2d" event={"ID":"e57c2aaa-e624-4554-a216-411263758595","Type":"ContainerStarted","Data":"6264cbc4dfb3251a994990b3e8fa424077d3c85f1ecb113417ca9e2569d22b7c"}
Jan 28 15:53:55 crc kubenswrapper[4656]: I0128 15:53:55.890979 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 15:53:56 crc kubenswrapper[4656]: I0128 15:53:56.895967 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twk2d" event={"ID":"e57c2aaa-e624-4554-a216-411263758595","Type":"ContainerStarted","Data":"8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d"}
Jan 28 15:53:57 crc kubenswrapper[4656]: I0128 15:53:57.905080 4656 generic.go:334] "Generic (PLEG): container finished" podID="e57c2aaa-e624-4554-a216-411263758595" containerID="8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d" exitCode=0
Jan 28 15:53:57 crc kubenswrapper[4656]: I0128 15:53:57.905454 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twk2d" event={"ID":"e57c2aaa-e624-4554-a216-411263758595","Type":"ContainerDied","Data":"8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d"}
Jan 28 15:53:57 crc kubenswrapper[4656]: I0128 15:53:57.905486 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twk2d" event={"ID":"e57c2aaa-e624-4554-a216-411263758595","Type":"ContainerStarted","Data":"0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19"}
Jan 28 15:53:57 crc kubenswrapper[4656]: I0128 15:53:57.930140 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-twk2d" podStartSLOduration=3.498191428 podStartE2EDuration="4.930123764s" podCreationTimestamp="2026-01-28 15:53:53 +0000 UTC" firstStartedPulling="2026-01-28 15:53:55.890614682 +0000 UTC m=+2126.398785486" lastFinishedPulling="2026-01-28 15:53:57.322547018 +0000 UTC m=+2127.830717822" observedRunningTime="2026-01-28 15:53:57.928899429 +0000 UTC m=+2128.437070233" watchObservedRunningTime="2026-01-28 15:53:57.930123764 +0000 UTC m=+2128.438294568"
Jan 28 15:54:04 crc kubenswrapper[4656]: I0128 15:54:04.264681 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:54:04 crc kubenswrapper[4656]: I0128 15:54:04.265488 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:54:04 crc kubenswrapper[4656]: I0128 15:54:04.308319 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:54:05 crc kubenswrapper[4656]: I0128 15:54:05.004268 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:54:05 crc kubenswrapper[4656]: I0128 15:54:05.053345 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-twk2d"]
Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.034327 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-twk2d" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="registry-server" containerID="cri-o://0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19" gracePeriod=2
Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.500986 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.542580 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-utilities\") pod \"e57c2aaa-e624-4554-a216-411263758595\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") "
Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.542664 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sjc5\" (UniqueName: \"kubernetes.io/projected/e57c2aaa-e624-4554-a216-411263758595-kube-api-access-7sjc5\") pod \"e57c2aaa-e624-4554-a216-411263758595\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") "
Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.542708 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-catalog-content\") pod \"e57c2aaa-e624-4554-a216-411263758595\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") "
Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.544116 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-utilities" (OuterVolumeSpecName: "utilities") pod "e57c2aaa-e624-4554-a216-411263758595" (UID: "e57c2aaa-e624-4554-a216-411263758595"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.549407 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57c2aaa-e624-4554-a216-411263758595-kube-api-access-7sjc5" (OuterVolumeSpecName: "kube-api-access-7sjc5") pod "e57c2aaa-e624-4554-a216-411263758595" (UID: "e57c2aaa-e624-4554-a216-411263758595"). InnerVolumeSpecName "kube-api-access-7sjc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.645085 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.645126 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sjc5\" (UniqueName: \"kubernetes.io/projected/e57c2aaa-e624-4554-a216-411263758595-kube-api-access-7sjc5\") on node \"crc\" DevicePath \"\""
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.047134 4656 generic.go:334] "Generic (PLEG): container finished" podID="e57c2aaa-e624-4554-a216-411263758595" containerID="0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19" exitCode=0
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.047215 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twk2d" event={"ID":"e57c2aaa-e624-4554-a216-411263758595","Type":"ContainerDied","Data":"0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19"}
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.047247 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twk2d" event={"ID":"e57c2aaa-e624-4554-a216-411263758595","Type":"ContainerDied","Data":"6264cbc4dfb3251a994990b3e8fa424077d3c85f1ecb113417ca9e2569d22b7c"}
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.047268 4656 scope.go:117] "RemoveContainer" containerID="0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19"
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.047430 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.089299 4656 scope.go:117] "RemoveContainer" containerID="8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d"
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.195468 4656 scope.go:117] "RemoveContainer" containerID="90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878"
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.226883 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e57c2aaa-e624-4554-a216-411263758595" (UID: "e57c2aaa-e624-4554-a216-411263758595"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.271928 4656 scope.go:117] "RemoveContainer" containerID="0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19"
Jan 28 15:54:08 crc kubenswrapper[4656]: E0128 15:54:08.272714 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19\": container with ID starting with 0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19 not found: ID does not exist" containerID="0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19"
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.272785 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19"} err="failed to get container status \"0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19\": rpc error: code = NotFound desc = could not find container \"0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19\": container with ID starting with 0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19 not found: ID does not exist"
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.272857 4656 scope.go:117] "RemoveContainer" containerID="8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d"
Jan 28 15:54:08 crc kubenswrapper[4656]: E0128 15:54:08.273216 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d\": container with ID starting with 8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d not found: ID does not exist" containerID="8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d"
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.273280 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d"} err="failed to get container status \"8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d\": rpc error: code = NotFound desc = could not find container \"8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d\": container with ID starting with 8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d not found: ID does not exist"
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.273313 4656 scope.go:117] "RemoveContainer" containerID="90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878"
Jan 28 15:54:08 crc kubenswrapper[4656]: E0128 15:54:08.273829 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878\": container with ID starting with 90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878 not found: ID does not exist" containerID="90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878"
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.273871 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878"} err="failed to get container status \"90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878\": rpc error: code = NotFound desc = could not find container \"90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878\": container with ID starting with 90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878 not found: ID does not exist"
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.296943 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.381966 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-twk2d"]
Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.389976 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-twk2d"]
Jan 28 15:54:09 crc kubenswrapper[4656]: I0128 15:54:09.181542 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e57c2aaa-e624-4554-a216-411263758595" path="/var/lib/kubelet/pods/e57c2aaa-e624-4554-a216-411263758595/volumes"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.380519 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n5zgq"]
Jan 28 15:54:27 crc kubenswrapper[4656]: E0128 15:54:27.381516 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="extract-utilities"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.381540 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="extract-utilities"
Jan 28 15:54:27 crc kubenswrapper[4656]: E0128 15:54:27.381587 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="extract-content"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.381596 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="extract-content"
Jan 28 15:54:27 crc kubenswrapper[4656]: E0128 15:54:27.381612 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="registry-server"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.381620 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="registry-server"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.381830 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="registry-server"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.387380 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5zgq"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.401662 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5zgq"]
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.560851 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js5cf\" (UniqueName: \"kubernetes.io/projected/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-kube-api-access-js5cf\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.561037 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-catalog-content\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.561097 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-utilities\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.663529 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js5cf\" (UniqueName: \"kubernetes.io/projected/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-kube-api-access-js5cf\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.663648 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-catalog-content\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.663700 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-utilities\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.664733 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-utilities\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.664776 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-catalog-content\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.696100 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js5cf\" (UniqueName: \"kubernetes.io/projected/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-kube-api-access-js5cf\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq"
Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.721981 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5zgq"
Jan 28 15:54:28 crc kubenswrapper[4656]: I0128 15:54:28.210073 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5zgq"]
Jan 28 15:54:28 crc kubenswrapper[4656]: E0128 15:54:28.776796 4656 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb182725_ad9b_4f0d_a39d_2894dd0e9eec.slice/crio-conmon-df6efca17129572fe0168f8842215e2146f762458f4fe160548e67caf45b4087.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb182725_ad9b_4f0d_a39d_2894dd0e9eec.slice/crio-df6efca17129572fe0168f8842215e2146f762458f4fe160548e67caf45b4087.scope\": RecentStats: unable to find data in memory cache]"
Jan 28 15:54:29 crc kubenswrapper[4656]: I0128 15:54:29.217519 4656 generic.go:334] "Generic (PLEG): container finished" podID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerID="df6efca17129572fe0168f8842215e2146f762458f4fe160548e67caf45b4087" exitCode=0
Jan 28 15:54:29 crc kubenswrapper[4656]: I0128 15:54:29.217570 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5zgq" event={"ID":"cb182725-ad9b-4f0d-a39d-2894dd0e9eec","Type":"ContainerDied","Data":"df6efca17129572fe0168f8842215e2146f762458f4fe160548e67caf45b4087"}
Jan 28 15:54:29 crc kubenswrapper[4656]: I0128 15:54:29.217602 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5zgq" event={"ID":"cb182725-ad9b-4f0d-a39d-2894dd0e9eec","Type":"ContainerStarted","Data":"4f3afce81c85cdea5251cb0f237aedd7ef85ea415e43fa2fbb9a8029586bf277"}
Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.369390 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rzhzn"]
Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.371516 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rzhzn"
Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.383790 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rzhzn"]
Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.513671 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-utilities\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn"
Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.513965 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-catalog-content\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn"
Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.514246 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxpsd\" (UniqueName: \"kubernetes.io/projected/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-kube-api-access-hxpsd\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn"
Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.616365 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-utilities\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn"
Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.616709 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-catalog-content\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn"
Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.616864 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxpsd\" (UniqueName: \"kubernetes.io/projected/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-kube-api-access-hxpsd\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn"
Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.616955 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-utilities\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn"
Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.617081 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-catalog-content\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn"
Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.638018 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxpsd\" (UniqueName: \"kubernetes.io/projected/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-kube-api-access-hxpsd\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn"
Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.698470 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rzhzn"
Jan 28 15:54:31 crc kubenswrapper[4656]: W0128 15:54:31.393605 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod885e9bc9_ca09_4e4e_95ef_7b95d52c8dc9.slice/crio-0ca613556205ab6edb8e1954b3599b5997c08538a675d876d4aecb8bd3951d72 WatchSource:0}: Error finding container 0ca613556205ab6edb8e1954b3599b5997c08538a675d876d4aecb8bd3951d72: Status 404 returned error can't find the container with id 0ca613556205ab6edb8e1954b3599b5997c08538a675d876d4aecb8bd3951d72
Jan 28 15:54:31 crc kubenswrapper[4656]: I0128 15:54:31.401776 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rzhzn"]
Jan 28 15:54:32 crc kubenswrapper[4656]: I0128 15:54:32.243493 4656 generic.go:334] "Generic (PLEG): container finished" podID="885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9" containerID="48098686c1686f1d4d282eec6040cca09079c828b7580f6e6566d71b1092992e" exitCode=0
Jan 28 15:54:32 crc kubenswrapper[4656]: I0128 15:54:32.243556 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhzn" event={"ID":"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9","Type":"ContainerDied","Data":"48098686c1686f1d4d282eec6040cca09079c828b7580f6e6566d71b1092992e"}
Jan 28 15:54:32 crc kubenswrapper[4656]: I0128 15:54:32.243979 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhzn" event={"ID":"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9","Type":"ContainerStarted","Data":"0ca613556205ab6edb8e1954b3599b5997c08538a675d876d4aecb8bd3951d72"}
Jan 28 15:54:32 crc kubenswrapper[4656]: I0128 15:54:32.246982 4656 generic.go:334] "Generic (PLEG): container finished" podID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerID="2a0f04e479398ba85f28717023d2a1723ae7b04de613eda29938edd64f0ea69e" exitCode=0
Jan 28 15:54:32 crc kubenswrapper[4656]: I0128 15:54:32.247031 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5zgq" event={"ID":"cb182725-ad9b-4f0d-a39d-2894dd0e9eec","Type":"ContainerDied","Data":"2a0f04e479398ba85f28717023d2a1723ae7b04de613eda29938edd64f0ea69e"}
Jan 28 15:54:34 crc kubenswrapper[4656]: I0128 15:54:34.267085 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5zgq" event={"ID":"cb182725-ad9b-4f0d-a39d-2894dd0e9eec","Type":"ContainerStarted","Data":"a57ed383ffe0656b19a7d349ff6ccd07470e1ec15490834868bbec5370166a25"}
Jan 28 15:54:34 crc kubenswrapper[4656]: I0128 15:54:34.297029 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n5zgq" podStartSLOduration=3.091862315 podStartE2EDuration="7.296999743s" podCreationTimestamp="2026-01-28 15:54:27 +0000 UTC" firstStartedPulling="2026-01-28 15:54:29.220446134 +0000 UTC m=+2159.728616938" lastFinishedPulling="2026-01-28 15:54:33.425583562 +0000 UTC m=+2163.933754366" observedRunningTime="2026-01-28 15:54:34.290003244 +0000 UTC m=+2164.798174038" watchObservedRunningTime="2026-01-28 15:54:34.296999743 +0000 UTC m=+2164.805170557"
Jan 28 15:54:37 crc kubenswrapper[4656]: I0128 15:54:37.722944 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n5zgq"
Jan 28 15:54:37 crc kubenswrapper[4656]: I0128 15:54:37.723793 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness"
status="" pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:37 crc kubenswrapper[4656]: I0128 15:54:37.768365 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:38 crc kubenswrapper[4656]: I0128 15:54:38.348882 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:38 crc kubenswrapper[4656]: I0128 15:54:38.398851 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5zgq"] Jan 28 15:54:40 crc kubenswrapper[4656]: I0128 15:54:40.329591 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n5zgq" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="registry-server" containerID="cri-o://a57ed383ffe0656b19a7d349ff6ccd07470e1ec15490834868bbec5370166a25" gracePeriod=2 Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.354646 4656 generic.go:334] "Generic (PLEG): container finished" podID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerID="a57ed383ffe0656b19a7d349ff6ccd07470e1ec15490834868bbec5370166a25" exitCode=0 Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.354712 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5zgq" event={"ID":"cb182725-ad9b-4f0d-a39d-2894dd0e9eec","Type":"ContainerDied","Data":"a57ed383ffe0656b19a7d349ff6ccd07470e1ec15490834868bbec5370166a25"} Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.704752 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.836252 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-utilities\") pod \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.836293 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js5cf\" (UniqueName: \"kubernetes.io/projected/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-kube-api-access-js5cf\") pod \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.836528 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-catalog-content\") pod \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.837090 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-utilities" (OuterVolumeSpecName: "utilities") pod "cb182725-ad9b-4f0d-a39d-2894dd0e9eec" (UID: "cb182725-ad9b-4f0d-a39d-2894dd0e9eec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.841792 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-kube-api-access-js5cf" (OuterVolumeSpecName: "kube-api-access-js5cf") pod "cb182725-ad9b-4f0d-a39d-2894dd0e9eec" (UID: "cb182725-ad9b-4f0d-a39d-2894dd0e9eec"). InnerVolumeSpecName "kube-api-access-js5cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.851705 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb182725-ad9b-4f0d-a39d-2894dd0e9eec" (UID: "cb182725-ad9b-4f0d-a39d-2894dd0e9eec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.938378 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.938414 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.938429 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js5cf\" (UniqueName: \"kubernetes.io/projected/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-kube-api-access-js5cf\") on node \"crc\" DevicePath \"\"" Jan 28 15:54:44 crc kubenswrapper[4656]: I0128 15:54:44.368936 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5zgq" event={"ID":"cb182725-ad9b-4f0d-a39d-2894dd0e9eec","Type":"ContainerDied","Data":"4f3afce81c85cdea5251cb0f237aedd7ef85ea415e43fa2fbb9a8029586bf277"} Jan 28 15:54:44 crc kubenswrapper[4656]: I0128 15:54:44.369049 4656 scope.go:117] "RemoveContainer" containerID="a57ed383ffe0656b19a7d349ff6ccd07470e1ec15490834868bbec5370166a25" Jan 28 15:54:44 crc kubenswrapper[4656]: I0128 15:54:44.369062 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:44 crc kubenswrapper[4656]: I0128 15:54:44.419017 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5zgq"] Jan 28 15:54:44 crc kubenswrapper[4656]: I0128 15:54:44.424145 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5zgq"] Jan 28 15:54:44 crc kubenswrapper[4656]: I0128 15:54:44.836083 4656 scope.go:117] "RemoveContainer" containerID="2a0f04e479398ba85f28717023d2a1723ae7b04de613eda29938edd64f0ea69e" Jan 28 15:54:44 crc kubenswrapper[4656]: I0128 15:54:44.936182 4656 scope.go:117] "RemoveContainer" containerID="df6efca17129572fe0168f8842215e2146f762458f4fe160548e67caf45b4087" Jan 28 15:54:45 crc kubenswrapper[4656]: I0128 15:54:45.180291 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" path="/var/lib/kubelet/pods/cb182725-ad9b-4f0d-a39d-2894dd0e9eec/volumes" Jan 28 15:54:45 crc kubenswrapper[4656]: I0128 15:54:45.381197 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhzn" event={"ID":"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9","Type":"ContainerStarted","Data":"3bb46874ceec0f9341d3ac5399ed19efe332d91056be4c194b5ce973cec48a7e"} Jan 28 15:54:46 crc kubenswrapper[4656]: I0128 15:54:46.408004 4656 generic.go:334] "Generic (PLEG): container finished" podID="885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9" containerID="3bb46874ceec0f9341d3ac5399ed19efe332d91056be4c194b5ce973cec48a7e" exitCode=0 Jan 28 15:54:46 crc kubenswrapper[4656]: I0128 15:54:46.408057 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhzn" event={"ID":"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9","Type":"ContainerDied","Data":"3bb46874ceec0f9341d3ac5399ed19efe332d91056be4c194b5ce973cec48a7e"} Jan 28 15:54:57 crc kubenswrapper[4656]: I0128 15:54:57.508500 4656 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhzn" event={"ID":"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9","Type":"ContainerStarted","Data":"d084efcc9a88194fbe3d0e7103e6b7833de09a444bca022469c59ded4ad82d3b"} Jan 28 15:54:58 crc kubenswrapper[4656]: I0128 15:54:58.540332 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rzhzn" podStartSLOduration=3.57546239 podStartE2EDuration="28.540305989s" podCreationTimestamp="2026-01-28 15:54:30 +0000 UTC" firstStartedPulling="2026-01-28 15:54:32.245665643 +0000 UTC m=+2162.753836447" lastFinishedPulling="2026-01-28 15:54:57.210509242 +0000 UTC m=+2187.718680046" observedRunningTime="2026-01-28 15:54:58.538180639 +0000 UTC m=+2189.046351443" watchObservedRunningTime="2026-01-28 15:54:58.540305989 +0000 UTC m=+2189.048476793" Jan 28 15:55:00 crc kubenswrapper[4656]: I0128 15:55:00.698899 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:55:00 crc kubenswrapper[4656]: I0128 15:55:00.699194 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:55:01 crc kubenswrapper[4656]: I0128 15:55:01.860685 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rzhzn" podUID="885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9" containerName="registry-server" probeResult="failure" output=< Jan 28 15:55:01 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 15:55:01 crc kubenswrapper[4656]: > Jan 28 15:55:10 crc kubenswrapper[4656]: I0128 15:55:10.745591 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:55:10 crc kubenswrapper[4656]: I0128 15:55:10.796893 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:55:10 crc kubenswrapper[4656]: I0128 15:55:10.896010 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rzhzn"] Jan 28 15:55:10 crc kubenswrapper[4656]: I0128 15:55:10.988411 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbjhq"] Jan 28 15:55:10 crc kubenswrapper[4656]: I0128 15:55:10.988932 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gbjhq" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="registry-server" containerID="cri-o://7357e3dce7c91d9015f9378573757a7ac9f266086a426319f16d40934b333d4a" gracePeriod=2 Jan 28 15:55:11 crc kubenswrapper[4656]: I0128 15:55:11.652959 4656 generic.go:334] "Generic (PLEG): container finished" podID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerID="7357e3dce7c91d9015f9378573757a7ac9f266086a426319f16d40934b333d4a" exitCode=0 Jan 28 15:55:11 crc kubenswrapper[4656]: I0128 15:55:11.653177 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbjhq" event={"ID":"ea7644c9-f50c-43f8-8165-3fa375c3b9c0","Type":"ContainerDied","Data":"7357e3dce7c91d9015f9378573757a7ac9f266086a426319f16d40934b333d4a"} Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.180764 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.240969 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-utilities\") pod \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.241058 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6bdv\" (UniqueName: \"kubernetes.io/projected/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-kube-api-access-b6bdv\") pod \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.241150 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-catalog-content\") pod \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.241451 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-utilities" (OuterVolumeSpecName: "utilities") pod "ea7644c9-f50c-43f8-8165-3fa375c3b9c0" (UID: "ea7644c9-f50c-43f8-8165-3fa375c3b9c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.264347 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-kube-api-access-b6bdv" (OuterVolumeSpecName: "kube-api-access-b6bdv") pod "ea7644c9-f50c-43f8-8165-3fa375c3b9c0" (UID: "ea7644c9-f50c-43f8-8165-3fa375c3b9c0"). InnerVolumeSpecName "kube-api-access-b6bdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.342770 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.343059 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6bdv\" (UniqueName: \"kubernetes.io/projected/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-kube-api-access-b6bdv\") on node \"crc\" DevicePath \"\"" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.350252 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea7644c9-f50c-43f8-8165-3fa375c3b9c0" (UID: "ea7644c9-f50c-43f8-8165-3fa375c3b9c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.444448 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.662535 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbjhq" event={"ID":"ea7644c9-f50c-43f8-8165-3fa375c3b9c0","Type":"ContainerDied","Data":"2dd598bb76a09812057b8dc1896e04e054099bec7b15c382202025f6bc1dcb53"} Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.662629 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.662640 4656 scope.go:117] "RemoveContainer" containerID="7357e3dce7c91d9015f9378573757a7ac9f266086a426319f16d40934b333d4a" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.689106 4656 scope.go:117] "RemoveContainer" containerID="98395829abb715f8be4df3366fdfa04876abca34d58e9c479a1d7e699b8ca848" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.698316 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbjhq"] Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.704935 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gbjhq"] Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.721531 4656 scope.go:117] "RemoveContainer" containerID="df16b6a6ba81e2b7746a8de266aea33bfee9014a3f26a0a2bc636c8468b4da77" Jan 28 15:55:13 crc kubenswrapper[4656]: I0128 15:55:13.179323 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" path="/var/lib/kubelet/pods/ea7644c9-f50c-43f8-8165-3fa375c3b9c0/volumes" Jan 28 15:55:41 crc kubenswrapper[4656]: I0128 15:55:41.263816 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:55:41 crc kubenswrapper[4656]: I0128 15:55:41.264571 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:56:11 crc kubenswrapper[4656]: I0128 
15:56:11.263838 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:56:11 crc kubenswrapper[4656]: I0128 15:56:11.264487 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.263798 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.264498 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.264582 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.265974 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.266076 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" gracePeriod=600 Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.706071 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" exitCode=0 Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.706147 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"} Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.706264 4656 scope.go:117] "RemoveContainer" containerID="bb32448722780b8d9f530e9024e281722b1f8d106f54d17c562c9b55376475e0" Jan 28 15:56:41 crc kubenswrapper[4656]: E0128 15:56:41.906356 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:56:42 crc kubenswrapper[4656]: I0128 15:56:42.719098 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:56:42 crc kubenswrapper[4656]: E0128 15:56:42.719657 4656 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:56:54 crc kubenswrapper[4656]: I0128 15:56:54.170627 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:56:54 crc kubenswrapper[4656]: E0128 15:56:54.171543 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:57:08 crc kubenswrapper[4656]: I0128 15:57:08.170850 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:57:08 crc kubenswrapper[4656]: E0128 15:57:08.171616 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:57:21 crc kubenswrapper[4656]: I0128 15:57:21.175603 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:57:21 crc kubenswrapper[4656]: E0128 
15:57:21.176667 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:57:32 crc kubenswrapper[4656]: I0128 15:57:32.171248 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:57:32 crc kubenswrapper[4656]: E0128 15:57:32.171859 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:57:45 crc kubenswrapper[4656]: I0128 15:57:45.170763 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:57:45 crc kubenswrapper[4656]: E0128 15:57:45.171678 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:58:00 crc kubenswrapper[4656]: I0128 15:58:00.171386 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:58:00 crc 
kubenswrapper[4656]: E0128 15:58:00.172175 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:58:14 crc kubenswrapper[4656]: I0128 15:58:14.171000 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 15:58:14 crc kubenswrapper[4656]: E0128 15:58:14.171996 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:58:26 crc kubenswrapper[4656]: I0128 15:58:26.171915 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 15:58:26 crc kubenswrapper[4656]: E0128 15:58:26.172878 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:58:40 crc kubenswrapper[4656]: I0128 15:58:40.171027 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 15:58:40 crc kubenswrapper[4656]: E0128 15:58:40.171784 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:58:54 crc kubenswrapper[4656]: I0128 15:58:54.171259 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 15:58:54 crc kubenswrapper[4656]: E0128 15:58:54.172354 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.632040 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ghkdq"]
Jan 28 15:59:05 crc kubenswrapper[4656]: E0128 15:59:05.633200 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="registry-server"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.633225 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="registry-server"
Jan 28 15:59:05 crc kubenswrapper[4656]: E0128 15:59:05.633269 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="registry-server"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.633281 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="registry-server"
Jan 28 15:59:05 crc kubenswrapper[4656]: E0128 15:59:05.634564 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="extract-content"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.634586 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="extract-content"
Jan 28 15:59:05 crc kubenswrapper[4656]: E0128 15:59:05.634606 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="extract-utilities"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.634615 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="extract-utilities"
Jan 28 15:59:05 crc kubenswrapper[4656]: E0128 15:59:05.634634 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="extract-utilities"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.634643 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="extract-utilities"
Jan 28 15:59:05 crc kubenswrapper[4656]: E0128 15:59:05.634660 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="extract-content"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.634668 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="extract-content"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.634970 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="registry-server"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.635069 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="registry-server"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.636783 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.669612 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ghkdq"]
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.818395 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-catalog-content\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.818514 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-utilities\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.818856 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xvr2\" (UniqueName: \"kubernetes.io/projected/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-kube-api-access-4xvr2\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.920859 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xvr2\" (UniqueName: \"kubernetes.io/projected/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-kube-api-access-4xvr2\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.920998 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-catalog-content\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.921031 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-utilities\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.921502 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-catalog-content\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.921608 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-utilities\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.950434 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xvr2\" (UniqueName: \"kubernetes.io/projected/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-kube-api-access-4xvr2\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.965866 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:06 crc kubenswrapper[4656]: I0128 15:59:06.643910 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ghkdq"]
Jan 28 15:59:06 crc kubenswrapper[4656]: W0128 15:59:06.655629 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb90e1d72_b1d7_4ea8_bb09_a3803e370d88.slice/crio-ae17afe9e4ccf5678f100cd9b211c1cd96b324c3c4c05894852f3151b6e7cfcd WatchSource:0}: Error finding container ae17afe9e4ccf5678f100cd9b211c1cd96b324c3c4c05894852f3151b6e7cfcd: Status 404 returned error can't find the container with id ae17afe9e4ccf5678f100cd9b211c1cd96b324c3c4c05894852f3151b6e7cfcd
Jan 28 15:59:06 crc kubenswrapper[4656]: I0128 15:59:06.957404 4656 generic.go:334] "Generic (PLEG): container finished" podID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerID="515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2" exitCode=0
Jan 28 15:59:06 crc kubenswrapper[4656]: I0128 15:59:06.957482 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghkdq" event={"ID":"b90e1d72-b1d7-4ea8-bb09-a3803e370d88","Type":"ContainerDied","Data":"515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2"}
Jan 28 15:59:06 crc kubenswrapper[4656]: I0128 15:59:06.957761 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghkdq" event={"ID":"b90e1d72-b1d7-4ea8-bb09-a3803e370d88","Type":"ContainerStarted","Data":"ae17afe9e4ccf5678f100cd9b211c1cd96b324c3c4c05894852f3151b6e7cfcd"}
Jan 28 15:59:06 crc kubenswrapper[4656]: I0128 15:59:06.959574 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 15:59:07 crc kubenswrapper[4656]: I0128 15:59:07.171227 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 15:59:07 crc kubenswrapper[4656]: E0128 15:59:07.171540 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:59:08 crc kubenswrapper[4656]: I0128 15:59:08.979949 4656 generic.go:334] "Generic (PLEG): container finished" podID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerID="dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe" exitCode=0
Jan 28 15:59:08 crc kubenswrapper[4656]: I0128 15:59:08.980258 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghkdq" event={"ID":"b90e1d72-b1d7-4ea8-bb09-a3803e370d88","Type":"ContainerDied","Data":"dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe"}
Jan 28 15:59:10 crc kubenswrapper[4656]: I0128 15:59:10.998339 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghkdq" event={"ID":"b90e1d72-b1d7-4ea8-bb09-a3803e370d88","Type":"ContainerStarted","Data":"ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd"}
Jan 28 15:59:11 crc kubenswrapper[4656]: I0128 15:59:11.031878 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ghkdq" podStartSLOduration=3.177657461 podStartE2EDuration="6.031834411s" podCreationTimestamp="2026-01-28 15:59:05 +0000 UTC" firstStartedPulling="2026-01-28 15:59:06.959223922 +0000 UTC m=+2437.467394726" lastFinishedPulling="2026-01-28 15:59:09.813400872 +0000 UTC m=+2440.321571676" observedRunningTime="2026-01-28 15:59:11.021120086 +0000 UTC m=+2441.529290890" watchObservedRunningTime="2026-01-28 15:59:11.031834411 +0000 UTC m=+2441.540005225"
Jan 28 15:59:15 crc kubenswrapper[4656]: I0128 15:59:15.966877 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:15 crc kubenswrapper[4656]: I0128 15:59:15.968903 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:16 crc kubenswrapper[4656]: I0128 15:59:16.015393 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:16 crc kubenswrapper[4656]: I0128 15:59:16.112958 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.016991 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ghkdq"]
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.017628 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ghkdq" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="registry-server" containerID="cri-o://ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd" gracePeriod=2
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.570464 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.703988 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-utilities\") pod \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") "
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.704229 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-catalog-content\") pod \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") "
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.704309 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xvr2\" (UniqueName: \"kubernetes.io/projected/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-kube-api-access-4xvr2\") pod \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") "
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.705073 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-utilities" (OuterVolumeSpecName: "utilities") pod "b90e1d72-b1d7-4ea8-bb09-a3803e370d88" (UID: "b90e1d72-b1d7-4ea8-bb09-a3803e370d88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.710010 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-kube-api-access-4xvr2" (OuterVolumeSpecName: "kube-api-access-4xvr2") pod "b90e1d72-b1d7-4ea8-bb09-a3803e370d88" (UID: "b90e1d72-b1d7-4ea8-bb09-a3803e370d88"). InnerVolumeSpecName "kube-api-access-4xvr2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.759979 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b90e1d72-b1d7-4ea8-bb09-a3803e370d88" (UID: "b90e1d72-b1d7-4ea8-bb09-a3803e370d88"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.806835 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.807098 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xvr2\" (UniqueName: \"kubernetes.io/projected/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-kube-api-access-4xvr2\") on node \"crc\" DevicePath \"\""
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.807227 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.099225 4656 generic.go:334] "Generic (PLEG): container finished" podID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerID="ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd" exitCode=0
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.099274 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghkdq" event={"ID":"b90e1d72-b1d7-4ea8-bb09-a3803e370d88","Type":"ContainerDied","Data":"ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd"}
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.099310 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghkdq" event={"ID":"b90e1d72-b1d7-4ea8-bb09-a3803e370d88","Type":"ContainerDied","Data":"ae17afe9e4ccf5678f100cd9b211c1cd96b324c3c4c05894852f3151b6e7cfcd"}
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.099330 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.099355 4656 scope.go:117] "RemoveContainer" containerID="ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.122700 4656 scope.go:117] "RemoveContainer" containerID="dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.148417 4656 scope.go:117] "RemoveContainer" containerID="515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.151534 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ghkdq"]
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.160925 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ghkdq"]
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.176268 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 15:59:21 crc kubenswrapper[4656]: E0128 15:59:21.176555 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.183684 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" path="/var/lib/kubelet/pods/b90e1d72-b1d7-4ea8-bb09-a3803e370d88/volumes"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.188790 4656 scope.go:117] "RemoveContainer" containerID="ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd"
Jan 28 15:59:21 crc kubenswrapper[4656]: E0128 15:59:21.189476 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd\": container with ID starting with ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd not found: ID does not exist" containerID="ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.189554 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd"} err="failed to get container status \"ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd\": rpc error: code = NotFound desc = could not find container \"ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd\": container with ID starting with ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd not found: ID does not exist"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.189613 4656 scope.go:117] "RemoveContainer" containerID="dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe"
Jan 28 15:59:21 crc kubenswrapper[4656]: E0128 15:59:21.189996 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe\": container with ID starting with dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe not found: ID does not exist" containerID="dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.190069 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe"} err="failed to get container status \"dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe\": rpc error: code = NotFound desc = could not find container \"dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe\": container with ID starting with dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe not found: ID does not exist"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.190137 4656 scope.go:117] "RemoveContainer" containerID="515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2"
Jan 28 15:59:21 crc kubenswrapper[4656]: E0128 15:59:21.190529 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2\": container with ID starting with 515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2 not found: ID does not exist" containerID="515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.190632 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2"} err="failed to get container status \"515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2\": rpc error: code = NotFound desc = could not find container \"515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2\": container with ID starting with 515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2 not found: ID does not exist"
Jan 28 15:59:36 crc kubenswrapper[4656]: I0128 15:59:36.171272 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 15:59:36 crc kubenswrapper[4656]: E0128 15:59:36.172214 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:59:48 crc kubenswrapper[4656]: I0128 15:59:48.170730 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 15:59:48 crc kubenswrapper[4656]: E0128 15:59:48.171499 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:59:59 crc kubenswrapper[4656]: I0128 15:59:59.171037 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 15:59:59 crc kubenswrapper[4656]: E0128 15:59:59.171752 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.156343 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"]
Jan 28 16:00:00 crc kubenswrapper[4656]: E0128 16:00:00.157040 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="extract-utilities"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.157066 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="extract-utilities"
Jan 28 16:00:00 crc kubenswrapper[4656]: E0128 16:00:00.157079 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="registry-server"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.157084 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="registry-server"
Jan 28 16:00:00 crc kubenswrapper[4656]: E0128 16:00:00.157125 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="extract-content"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.157131 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="extract-content"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.157358 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="registry-server"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.158013 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.163485 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.165303 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.172487 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-secret-volume\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.172592 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-config-volume\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.172665 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpzrg\" (UniqueName: \"kubernetes.io/projected/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-kube-api-access-jpzrg\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.183623 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"]
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.274227 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-secret-volume\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.274640 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-config-volume\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.274755 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpzrg\" (UniqueName: \"kubernetes.io/projected/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-kube-api-access-jpzrg\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.275844 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-config-volume\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.294076 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-secret-volume\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.297119 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpzrg\" (UniqueName: \"kubernetes.io/projected/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-kube-api-access-jpzrg\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.482040 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.974995 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"]
Jan 28 16:00:01 crc kubenswrapper[4656]: I0128 16:00:01.459840 4656 generic.go:334] "Generic (PLEG): container finished" podID="a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c" containerID="c14f95d55d05b9700f6dccf5984a4f92e913e739d483090d400925435e4b6614" exitCode=0
Jan 28 16:00:01 crc kubenswrapper[4656]: I0128 16:00:01.460003 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj" event={"ID":"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c","Type":"ContainerDied","Data":"c14f95d55d05b9700f6dccf5984a4f92e913e739d483090d400925435e4b6614"}
Jan 28 16:00:01 crc kubenswrapper[4656]: I0128 16:00:01.461499 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj" event={"ID":"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c","Type":"ContainerStarted","Data":"6330a802e6a87da1777444fc0ea0e265505129329bad95ec85d5a46fa2ded2ba"}
Jan 28 16:00:02 crc kubenswrapper[4656]: I0128 16:00:02.813143 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:02 crc kubenswrapper[4656]: I0128 16:00:02.916915 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-secret-volume\") pod \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") "
Jan 28 16:00:02 crc kubenswrapper[4656]: I0128 16:00:02.917057 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-config-volume\") pod \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") "
Jan 28 16:00:02 crc kubenswrapper[4656]: I0128 16:00:02.917153 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpzrg\" (UniqueName: \"kubernetes.io/projected/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-kube-api-access-jpzrg\") pod \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") "
Jan 28 16:00:02 crc kubenswrapper[4656]: I0128 16:00:02.918139 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-config-volume" (OuterVolumeSpecName: "config-volume") pod "a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c" (UID: "a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 16:00:02 crc kubenswrapper[4656]: I0128 16:00:02.923361 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-kube-api-access-jpzrg" (OuterVolumeSpecName: "kube-api-access-jpzrg") pod "a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c" (UID: "a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c"). InnerVolumeSpecName "kube-api-access-jpzrg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 16:00:02 crc kubenswrapper[4656]: I0128 16:00:02.924468 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c" (UID: "a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.019033 4656 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.019088 4656 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-config-volume\") on node \"crc\" DevicePath \"\""
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.019098 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpzrg\" (UniqueName: \"kubernetes.io/projected/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-kube-api-access-jpzrg\") on node \"crc\" DevicePath \"\""
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.480121 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj" event={"ID":"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c","Type":"ContainerDied","Data":"6330a802e6a87da1777444fc0ea0e265505129329bad95ec85d5a46fa2ded2ba"}
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.480193 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6330a802e6a87da1777444fc0ea0e265505129329bad95ec85d5a46fa2ded2ba"
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.480269 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.932865 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl"]
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.939215 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl"]
Jan 28 16:00:05 crc kubenswrapper[4656]: I0128 16:00:05.180841 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25578c16-69f7-48c0-8a44-040950b9b8a1" path="/var/lib/kubelet/pods/25578c16-69f7-48c0-8a44-040950b9b8a1/volumes"
Jan 28 16:00:14 crc kubenswrapper[4656]: I0128 16:00:14.170442 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 16:00:14 crc kubenswrapper[4656]: E0128 16:00:14.171422 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:00:25 crc kubenswrapper[4656]: I0128 16:00:25.171559 4656 scope.go:117] "RemoveContainer"
containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 16:00:25 crc kubenswrapper[4656]: E0128 16:00:25.172301 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:00:33 crc kubenswrapper[4656]: I0128 16:00:33.558189 4656 scope.go:117] "RemoveContainer" containerID="8b5651c9577faa4a2a76aaa78d0b3751ec036612cc4cb26724879ce32d32d8f2" Jan 28 16:00:38 crc kubenswrapper[4656]: I0128 16:00:38.170329 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 16:00:38 crc kubenswrapper[4656]: E0128 16:00:38.171107 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:00:50 crc kubenswrapper[4656]: I0128 16:00:50.171001 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 16:00:50 crc kubenswrapper[4656]: E0128 16:00:50.171999 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:01:02 crc kubenswrapper[4656]: I0128 16:01:02.171250 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 16:01:02 crc kubenswrapper[4656]: E0128 16:01:02.172282 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:01:17 crc kubenswrapper[4656]: I0128 16:01:17.171821 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 16:01:17 crc kubenswrapper[4656]: E0128 16:01:17.172692 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:01:32 crc kubenswrapper[4656]: I0128 16:01:32.171545 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 16:01:32 crc kubenswrapper[4656]: E0128 16:01:32.172420 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:01:43 crc kubenswrapper[4656]: I0128 16:01:43.170350 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 16:01:44 crc kubenswrapper[4656]: I0128 16:01:44.295084 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"11608f8e32efc5c69606e8eda1ef5dc362329c760e4cf15465bd4b124ca420f8"} Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.500984 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ptqg4"] Jan 28 16:04:08 crc kubenswrapper[4656]: E0128 16:04:08.503838 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c" containerName="collect-profiles" Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.503965 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c" containerName="collect-profiles" Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.504545 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c" containerName="collect-profiles" Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.512700 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.531263 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptqg4"] Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.686049 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-catalog-content\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.686583 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfw7h\" (UniqueName: \"kubernetes.io/projected/bfad1826-aa22-4f74-8370-00cf5c512680-kube-api-access-zfw7h\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.686622 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-utilities\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.788506 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-catalog-content\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.788877 4656 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zfw7h\" (UniqueName: \"kubernetes.io/projected/bfad1826-aa22-4f74-8370-00cf5c512680-kube-api-access-zfw7h\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.788966 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-utilities\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.789106 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-catalog-content\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.789400 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-utilities\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.818006 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfw7h\" (UniqueName: \"kubernetes.io/projected/bfad1826-aa22-4f74-8370-00cf5c512680-kube-api-access-zfw7h\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.857201 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:09 crc kubenswrapper[4656]: I0128 16:04:09.545447 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptqg4"] Jan 28 16:04:09 crc kubenswrapper[4656]: I0128 16:04:09.732384 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptqg4" event={"ID":"bfad1826-aa22-4f74-8370-00cf5c512680","Type":"ContainerStarted","Data":"3e0008c7f5d57a93410fc946402619129d193b9d1b16135c815b5229c66c7258"} Jan 28 16:04:10 crc kubenswrapper[4656]: I0128 16:04:10.744305 4656 generic.go:334] "Generic (PLEG): container finished" podID="bfad1826-aa22-4f74-8370-00cf5c512680" containerID="905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758" exitCode=0 Jan 28 16:04:10 crc kubenswrapper[4656]: I0128 16:04:10.744353 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptqg4" event={"ID":"bfad1826-aa22-4f74-8370-00cf5c512680","Type":"ContainerDied","Data":"905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758"} Jan 28 16:04:10 crc kubenswrapper[4656]: I0128 16:04:10.746966 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:04:11 crc kubenswrapper[4656]: I0128 16:04:11.264349 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:04:11 crc kubenswrapper[4656]: I0128 16:04:11.264697 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:04:12 crc kubenswrapper[4656]: I0128 16:04:12.760244 4656 generic.go:334] "Generic (PLEG): container finished" podID="bfad1826-aa22-4f74-8370-00cf5c512680" containerID="61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb" exitCode=0 Jan 28 16:04:12 crc kubenswrapper[4656]: I0128 16:04:12.760389 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptqg4" event={"ID":"bfad1826-aa22-4f74-8370-00cf5c512680","Type":"ContainerDied","Data":"61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb"} Jan 28 16:04:13 crc kubenswrapper[4656]: I0128 16:04:13.772590 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptqg4" event={"ID":"bfad1826-aa22-4f74-8370-00cf5c512680","Type":"ContainerStarted","Data":"08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c"} Jan 28 16:04:13 crc kubenswrapper[4656]: I0128 16:04:13.799109 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ptqg4" podStartSLOduration=3.096495383 podStartE2EDuration="5.799085758s" podCreationTimestamp="2026-01-28 16:04:08 +0000 UTC" firstStartedPulling="2026-01-28 16:04:10.746633348 +0000 UTC m=+2741.254804152" lastFinishedPulling="2026-01-28 16:04:13.449223713 +0000 UTC m=+2743.957394527" observedRunningTime="2026-01-28 16:04:13.792560371 +0000 UTC m=+2744.300731185" watchObservedRunningTime="2026-01-28 16:04:13.799085758 +0000 UTC m=+2744.307256562" Jan 28 16:04:18 crc kubenswrapper[4656]: I0128 16:04:18.858158 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:18 crc kubenswrapper[4656]: I0128 16:04:18.860205 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:18 crc kubenswrapper[4656]: I0128 16:04:18.905647 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:19 crc kubenswrapper[4656]: I0128 16:04:19.876457 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:19 crc kubenswrapper[4656]: I0128 16:04:19.930333 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptqg4"] Jan 28 16:04:21 crc kubenswrapper[4656]: I0128 16:04:21.835211 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ptqg4" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="registry-server" containerID="cri-o://08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c" gracePeriod=2 Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.507657 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.686394 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-catalog-content\") pod \"bfad1826-aa22-4f74-8370-00cf5c512680\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.686491 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfw7h\" (UniqueName: \"kubernetes.io/projected/bfad1826-aa22-4f74-8370-00cf5c512680-kube-api-access-zfw7h\") pod \"bfad1826-aa22-4f74-8370-00cf5c512680\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.686529 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-utilities\") pod \"bfad1826-aa22-4f74-8370-00cf5c512680\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.687428 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-utilities" (OuterVolumeSpecName: "utilities") pod "bfad1826-aa22-4f74-8370-00cf5c512680" (UID: "bfad1826-aa22-4f74-8370-00cf5c512680"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.699945 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfad1826-aa22-4f74-8370-00cf5c512680-kube-api-access-zfw7h" (OuterVolumeSpecName: "kube-api-access-zfw7h") pod "bfad1826-aa22-4f74-8370-00cf5c512680" (UID: "bfad1826-aa22-4f74-8370-00cf5c512680"). InnerVolumeSpecName "kube-api-access-zfw7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.742893 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfad1826-aa22-4f74-8370-00cf5c512680" (UID: "bfad1826-aa22-4f74-8370-00cf5c512680"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.788392 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.788438 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfw7h\" (UniqueName: \"kubernetes.io/projected/bfad1826-aa22-4f74-8370-00cf5c512680-kube-api-access-zfw7h\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.788457 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.843634 4656 generic.go:334] "Generic (PLEG): container finished" podID="bfad1826-aa22-4f74-8370-00cf5c512680" containerID="08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c" exitCode=0 Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.843691 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptqg4" event={"ID":"bfad1826-aa22-4f74-8370-00cf5c512680","Type":"ContainerDied","Data":"08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c"} Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.843738 4656 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-ptqg4" event={"ID":"bfad1826-aa22-4f74-8370-00cf5c512680","Type":"ContainerDied","Data":"3e0008c7f5d57a93410fc946402619129d193b9d1b16135c815b5229c66c7258"} Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.843794 4656 scope.go:117] "RemoveContainer" containerID="08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.843977 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptqg4" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.867874 4656 scope.go:117] "RemoveContainer" containerID="61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.886024 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptqg4"] Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.909788 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ptqg4"] Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.917736 4656 scope.go:117] "RemoveContainer" containerID="905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.946691 4656 scope.go:117] "RemoveContainer" containerID="08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c" Jan 28 16:04:22 crc kubenswrapper[4656]: E0128 16:04:22.947288 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c\": container with ID starting with 08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c not found: ID does not exist" containerID="08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 
16:04:22.947335 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c"} err="failed to get container status \"08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c\": rpc error: code = NotFound desc = could not find container \"08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c\": container with ID starting with 08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c not found: ID does not exist" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.947360 4656 scope.go:117] "RemoveContainer" containerID="61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb" Jan 28 16:04:22 crc kubenswrapper[4656]: E0128 16:04:22.947719 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb\": container with ID starting with 61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb not found: ID does not exist" containerID="61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.947760 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb"} err="failed to get container status \"61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb\": rpc error: code = NotFound desc = could not find container \"61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb\": container with ID starting with 61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb not found: ID does not exist" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.947791 4656 scope.go:117] "RemoveContainer" containerID="905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758" Jan 28 16:04:22 crc 
kubenswrapper[4656]: E0128 16:04:22.948119 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758\": container with ID starting with 905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758 not found: ID does not exist" containerID="905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758" Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.948147 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758"} err="failed to get container status \"905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758\": rpc error: code = NotFound desc = could not find container \"905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758\": container with ID starting with 905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758 not found: ID does not exist" Jan 28 16:04:23 crc kubenswrapper[4656]: I0128 16:04:23.183476 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" path="/var/lib/kubelet/pods/bfad1826-aa22-4f74-8370-00cf5c512680/volumes" Jan 28 16:04:41 crc kubenswrapper[4656]: I0128 16:04:41.264119 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:04:41 crc kubenswrapper[4656]: I0128 16:04:41.264728 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 28 16:05:11 crc kubenswrapper[4656]: I0128 16:05:11.263923 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:05:11 crc kubenswrapper[4656]: I0128 16:05:11.265731 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:05:11 crc kubenswrapper[4656]: I0128 16:05:11.265947 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 16:05:11 crc kubenswrapper[4656]: I0128 16:05:11.266843 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11608f8e32efc5c69606e8eda1ef5dc362329c760e4cf15465bd4b124ca420f8"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:05:11 crc kubenswrapper[4656]: I0128 16:05:11.267029 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://11608f8e32efc5c69606e8eda1ef5dc362329c760e4cf15465bd4b124ca420f8" gracePeriod=600 Jan 28 16:05:12 crc kubenswrapper[4656]: I0128 16:05:12.247141 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" 
containerID="11608f8e32efc5c69606e8eda1ef5dc362329c760e4cf15465bd4b124ca420f8" exitCode=0 Jan 28 16:05:12 crc kubenswrapper[4656]: I0128 16:05:12.247169 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"11608f8e32efc5c69606e8eda1ef5dc362329c760e4cf15465bd4b124ca420f8"} Jan 28 16:05:12 crc kubenswrapper[4656]: I0128 16:05:12.247674 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 16:05:13 crc kubenswrapper[4656]: I0128 16:05:13.258572 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"} Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.653182 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-grfxk"] Jan 28 16:05:16 crc kubenswrapper[4656]: E0128 16:05:16.653686 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="registry-server" Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.653709 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="registry-server" Jan 28 16:05:16 crc kubenswrapper[4656]: E0128 16:05:16.653736 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="extract-content" Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.653743 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="extract-content" Jan 28 16:05:16 crc kubenswrapper[4656]: E0128 16:05:16.653756 4656 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="extract-utilities" Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.653765 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="extract-utilities" Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.653980 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="registry-server" Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.655451 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.667530 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-grfxk"] Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.762731 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-catalog-content\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.762886 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-utilities\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.762955 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn2jk\" (UniqueName: \"kubernetes.io/projected/b576d819-1f74-45a6-a12f-8cb6f88900a2-kube-api-access-gn2jk\") pod 
\"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.864704 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-utilities\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.864801 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn2jk\" (UniqueName: \"kubernetes.io/projected/b576d819-1f74-45a6-a12f-8cb6f88900a2-kube-api-access-gn2jk\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.864829 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-catalog-content\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.865515 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-catalog-content\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.865832 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-utilities\") pod \"redhat-operators-grfxk\" (UID: 
\"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.891368 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn2jk\" (UniqueName: \"kubernetes.io/projected/b576d819-1f74-45a6-a12f-8cb6f88900a2-kube-api-access-gn2jk\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.982290 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:05:17 crc kubenswrapper[4656]: I0128 16:05:17.593860 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-grfxk"] Jan 28 16:05:18 crc kubenswrapper[4656]: I0128 16:05:18.298046 4656 generic.go:334] "Generic (PLEG): container finished" podID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerID="25707c1b7aa34daf8d1d203593fed712625cdedcf4f7a415d31a120483a3edfb" exitCode=0 Jan 28 16:05:18 crc kubenswrapper[4656]: I0128 16:05:18.298096 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grfxk" event={"ID":"b576d819-1f74-45a6-a12f-8cb6f88900a2","Type":"ContainerDied","Data":"25707c1b7aa34daf8d1d203593fed712625cdedcf4f7a415d31a120483a3edfb"} Jan 28 16:05:18 crc kubenswrapper[4656]: I0128 16:05:18.298147 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grfxk" event={"ID":"b576d819-1f74-45a6-a12f-8cb6f88900a2","Type":"ContainerStarted","Data":"f915ffe410661603fd5d6938c4d9be3708344da110936f2eccefe21a78fdbbad"} Jan 28 16:05:20 crc kubenswrapper[4656]: I0128 16:05:20.313847 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grfxk" 
event={"ID":"b576d819-1f74-45a6-a12f-8cb6f88900a2","Type":"ContainerStarted","Data":"38f722e1bfe839109981c0f0f0db4eff3febe78c06d24b651cd5373d7bc12663"} Jan 28 16:05:21 crc kubenswrapper[4656]: I0128 16:05:21.325217 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grfxk" event={"ID":"b576d819-1f74-45a6-a12f-8cb6f88900a2","Type":"ContainerDied","Data":"38f722e1bfe839109981c0f0f0db4eff3febe78c06d24b651cd5373d7bc12663"} Jan 28 16:05:21 crc kubenswrapper[4656]: I0128 16:05:21.325141 4656 generic.go:334] "Generic (PLEG): container finished" podID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerID="38f722e1bfe839109981c0f0f0db4eff3febe78c06d24b651cd5373d7bc12663" exitCode=0 Jan 28 16:05:29 crc kubenswrapper[4656]: I0128 16:05:29.417603 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grfxk" event={"ID":"b576d819-1f74-45a6-a12f-8cb6f88900a2","Type":"ContainerStarted","Data":"1023efe98e9294c68f6dd563afee559a68f500f630f7dc52bc58bfe310af0080"} Jan 28 16:05:29 crc kubenswrapper[4656]: I0128 16:05:29.441547 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-grfxk" podStartSLOduration=2.835280146 podStartE2EDuration="13.441530099s" podCreationTimestamp="2026-01-28 16:05:16 +0000 UTC" firstStartedPulling="2026-01-28 16:05:18.299811124 +0000 UTC m=+2808.807981928" lastFinishedPulling="2026-01-28 16:05:28.906061077 +0000 UTC m=+2819.414231881" observedRunningTime="2026-01-28 16:05:29.440932822 +0000 UTC m=+2819.949103636" watchObservedRunningTime="2026-01-28 16:05:29.441530099 +0000 UTC m=+2819.949700903" Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.701326 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bq6jw"] Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.704351 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.711043 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq6jw"] Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.891760 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-utilities\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.891936 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7js8\" (UniqueName: \"kubernetes.io/projected/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-kube-api-access-h7js8\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.892095 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-catalog-content\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.993205 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-utilities\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.993271 4656 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-h7js8\" (UniqueName: \"kubernetes.io/projected/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-kube-api-access-h7js8\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.993365 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-catalog-content\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.993909 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-catalog-content\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.993906 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-utilities\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:35 crc kubenswrapper[4656]: I0128 16:05:35.011050 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7js8\" (UniqueName: \"kubernetes.io/projected/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-kube-api-access-h7js8\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:35 crc kubenswrapper[4656]: I0128 16:05:35.021733 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:35 crc kubenswrapper[4656]: I0128 16:05:35.464690 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq6jw"] Jan 28 16:05:36 crc kubenswrapper[4656]: I0128 16:05:36.470600 4656 generic.go:334] "Generic (PLEG): container finished" podID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerID="0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e" exitCode=0 Jan 28 16:05:36 crc kubenswrapper[4656]: I0128 16:05:36.470663 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq6jw" event={"ID":"8c42606f-0266-40c9-a1b1-9e83f56f6ff9","Type":"ContainerDied","Data":"0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e"} Jan 28 16:05:36 crc kubenswrapper[4656]: I0128 16:05:36.470705 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq6jw" event={"ID":"8c42606f-0266-40c9-a1b1-9e83f56f6ff9","Type":"ContainerStarted","Data":"02e879378d671726794bd95864aec6b2f37302998154d98cac6b76633c12877a"} Jan 28 16:05:36 crc kubenswrapper[4656]: I0128 16:05:36.984939 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:05:36 crc kubenswrapper[4656]: I0128 16:05:36.985281 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:05:38 crc kubenswrapper[4656]: I0128 16:05:38.033829 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-grfxk" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" probeResult="failure" output=< Jan 28 16:05:38 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 16:05:38 crc kubenswrapper[4656]: > Jan 28 16:05:38 crc kubenswrapper[4656]: I0128 
16:05:38.487503 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq6jw" event={"ID":"8c42606f-0266-40c9-a1b1-9e83f56f6ff9","Type":"ContainerStarted","Data":"7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d"} Jan 28 16:05:39 crc kubenswrapper[4656]: I0128 16:05:39.565241 4656 generic.go:334] "Generic (PLEG): container finished" podID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerID="7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d" exitCode=0 Jan 28 16:05:39 crc kubenswrapper[4656]: I0128 16:05:39.565336 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq6jw" event={"ID":"8c42606f-0266-40c9-a1b1-9e83f56f6ff9","Type":"ContainerDied","Data":"7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d"} Jan 28 16:05:41 crc kubenswrapper[4656]: I0128 16:05:41.591392 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq6jw" event={"ID":"8c42606f-0266-40c9-a1b1-9e83f56f6ff9","Type":"ContainerStarted","Data":"255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67"} Jan 28 16:05:41 crc kubenswrapper[4656]: I0128 16:05:41.619348 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bq6jw" podStartSLOduration=2.828878261 podStartE2EDuration="7.619244708s" podCreationTimestamp="2026-01-28 16:05:34 +0000 UTC" firstStartedPulling="2026-01-28 16:05:36.472662515 +0000 UTC m=+2826.980833319" lastFinishedPulling="2026-01-28 16:05:41.263028962 +0000 UTC m=+2831.771199766" observedRunningTime="2026-01-28 16:05:41.618305611 +0000 UTC m=+2832.126476435" watchObservedRunningTime="2026-01-28 16:05:41.619244708 +0000 UTC m=+2832.127415512" Jan 28 16:05:45 crc kubenswrapper[4656]: I0128 16:05:45.022812 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 
28 16:05:45 crc kubenswrapper[4656]: I0128 16:05:45.023363 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:45 crc kubenswrapper[4656]: I0128 16:05:45.064241 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:48 crc kubenswrapper[4656]: I0128 16:05:48.024535 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-grfxk" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" probeResult="failure" output=< Jan 28 16:05:48 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 16:05:48 crc kubenswrapper[4656]: > Jan 28 16:05:55 crc kubenswrapper[4656]: I0128 16:05:55.067403 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:55 crc kubenswrapper[4656]: I0128 16:05:55.122327 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq6jw"] Jan 28 16:05:55 crc kubenswrapper[4656]: I0128 16:05:55.699117 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bq6jw" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="registry-server" containerID="cri-o://255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67" gracePeriod=2 Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.170929 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.329358 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7js8\" (UniqueName: \"kubernetes.io/projected/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-kube-api-access-h7js8\") pod \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.329769 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-utilities\") pod \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.329930 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-catalog-content\") pod \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.335336 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-utilities" (OuterVolumeSpecName: "utilities") pod "8c42606f-0266-40c9-a1b1-9e83f56f6ff9" (UID: "8c42606f-0266-40c9-a1b1-9e83f56f6ff9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.339155 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-kube-api-access-h7js8" (OuterVolumeSpecName: "kube-api-access-h7js8") pod "8c42606f-0266-40c9-a1b1-9e83f56f6ff9" (UID: "8c42606f-0266-40c9-a1b1-9e83f56f6ff9"). InnerVolumeSpecName "kube-api-access-h7js8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.352576 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c42606f-0266-40c9-a1b1-9e83f56f6ff9" (UID: "8c42606f-0266-40c9-a1b1-9e83f56f6ff9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.431523 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7js8\" (UniqueName: \"kubernetes.io/projected/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-kube-api-access-h7js8\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.431586 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.431611 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.775559 4656 generic.go:334] "Generic (PLEG): container finished" podID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerID="255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67" exitCode=0 Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.775624 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq6jw" event={"ID":"8c42606f-0266-40c9-a1b1-9e83f56f6ff9","Type":"ContainerDied","Data":"255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67"} Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.775678 4656 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.775695 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq6jw" event={"ID":"8c42606f-0266-40c9-a1b1-9e83f56f6ff9","Type":"ContainerDied","Data":"02e879378d671726794bd95864aec6b2f37302998154d98cac6b76633c12877a"} Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.775798 4656 scope.go:117] "RemoveContainer" containerID="255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.831044 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq6jw"] Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.839368 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq6jw"] Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.841900 4656 scope.go:117] "RemoveContainer" containerID="7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.868360 4656 scope.go:117] "RemoveContainer" containerID="0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.901644 4656 scope.go:117] "RemoveContainer" containerID="255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67" Jan 28 16:05:56 crc kubenswrapper[4656]: E0128 16:05:56.902137 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67\": container with ID starting with 255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67 not found: ID does not exist" containerID="255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.902247 4656 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67"} err="failed to get container status \"255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67\": rpc error: code = NotFound desc = could not find container \"255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67\": container with ID starting with 255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67 not found: ID does not exist" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.902282 4656 scope.go:117] "RemoveContainer" containerID="7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d" Jan 28 16:05:56 crc kubenswrapper[4656]: E0128 16:05:56.902587 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d\": container with ID starting with 7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d not found: ID does not exist" containerID="7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.902610 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d"} err="failed to get container status \"7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d\": rpc error: code = NotFound desc = could not find container \"7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d\": container with ID starting with 7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d not found: ID does not exist" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.902623 4656 scope.go:117] "RemoveContainer" containerID="0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e" Jan 28 16:05:56 crc kubenswrapper[4656]: E0128 
16:05:56.902837 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e\": container with ID starting with 0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e not found: ID does not exist" containerID="0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.902863 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e"} err="failed to get container status \"0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e\": rpc error: code = NotFound desc = could not find container \"0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e\": container with ID starting with 0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e not found: ID does not exist" Jan 28 16:05:57 crc kubenswrapper[4656]: I0128 16:05:57.181584 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" path="/var/lib/kubelet/pods/8c42606f-0266-40c9-a1b1-9e83f56f6ff9/volumes" Jan 28 16:05:58 crc kubenswrapper[4656]: I0128 16:05:58.035635 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-grfxk" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" probeResult="failure" output=< Jan 28 16:05:58 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 16:05:58 crc kubenswrapper[4656]: > Jan 28 16:06:08 crc kubenswrapper[4656]: I0128 16:06:08.025544 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-grfxk" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" probeResult="failure" output=< Jan 28 16:06:08 crc kubenswrapper[4656]: 
timeout: failed to connect service ":50051" within 1s Jan 28 16:06:08 crc kubenswrapper[4656]: > Jan 28 16:06:18 crc kubenswrapper[4656]: I0128 16:06:18.034373 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-grfxk" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" probeResult="failure" output=< Jan 28 16:06:18 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 16:06:18 crc kubenswrapper[4656]: > Jan 28 16:06:27 crc kubenswrapper[4656]: I0128 16:06:27.031139 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:06:27 crc kubenswrapper[4656]: I0128 16:06:27.085638 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:06:27 crc kubenswrapper[4656]: I0128 16:06:27.278887 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-grfxk"] Jan 28 16:06:28 crc kubenswrapper[4656]: I0128 16:06:28.150671 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-grfxk" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" containerID="cri-o://1023efe98e9294c68f6dd563afee559a68f500f630f7dc52bc58bfe310af0080" gracePeriod=2 Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.160653 4656 generic.go:334] "Generic (PLEG): container finished" podID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerID="1023efe98e9294c68f6dd563afee559a68f500f630f7dc52bc58bfe310af0080" exitCode=0 Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.160856 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grfxk" 
event={"ID":"b576d819-1f74-45a6-a12f-8cb6f88900a2","Type":"ContainerDied","Data":"1023efe98e9294c68f6dd563afee559a68f500f630f7dc52bc58bfe310af0080"} Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.161059 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grfxk" event={"ID":"b576d819-1f74-45a6-a12f-8cb6f88900a2","Type":"ContainerDied","Data":"f915ffe410661603fd5d6938c4d9be3708344da110936f2eccefe21a78fdbbad"} Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.161104 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f915ffe410661603fd5d6938c4d9be3708344da110936f2eccefe21a78fdbbad" Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.184382 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.244356 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-utilities\") pod \"b576d819-1f74-45a6-a12f-8cb6f88900a2\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.244454 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn2jk\" (UniqueName: \"kubernetes.io/projected/b576d819-1f74-45a6-a12f-8cb6f88900a2-kube-api-access-gn2jk\") pod \"b576d819-1f74-45a6-a12f-8cb6f88900a2\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.244495 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-catalog-content\") pod \"b576d819-1f74-45a6-a12f-8cb6f88900a2\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 
16:06:29.245891 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-utilities" (OuterVolumeSpecName: "utilities") pod "b576d819-1f74-45a6-a12f-8cb6f88900a2" (UID: "b576d819-1f74-45a6-a12f-8cb6f88900a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.251465 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b576d819-1f74-45a6-a12f-8cb6f88900a2-kube-api-access-gn2jk" (OuterVolumeSpecName: "kube-api-access-gn2jk") pod "b576d819-1f74-45a6-a12f-8cb6f88900a2" (UID: "b576d819-1f74-45a6-a12f-8cb6f88900a2"). InnerVolumeSpecName "kube-api-access-gn2jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.347344 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.347388 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn2jk\" (UniqueName: \"kubernetes.io/projected/b576d819-1f74-45a6-a12f-8cb6f88900a2-kube-api-access-gn2jk\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.360300 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b576d819-1f74-45a6-a12f-8cb6f88900a2" (UID: "b576d819-1f74-45a6-a12f-8cb6f88900a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.449131 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:30 crc kubenswrapper[4656]: I0128 16:06:30.170645 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:06:30 crc kubenswrapper[4656]: I0128 16:06:30.209920 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-grfxk"] Jan 28 16:06:30 crc kubenswrapper[4656]: I0128 16:06:30.229579 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-grfxk"] Jan 28 16:06:31 crc kubenswrapper[4656]: I0128 16:06:31.181996 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" path="/var/lib/kubelet/pods/b576d819-1f74-45a6-a12f-8cb6f88900a2/volumes" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.198312 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5j5mh/must-gather-llsvb"] Jan 28 16:07:24 crc kubenswrapper[4656]: E0128 16:07:24.199424 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="extract-content" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199450 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="extract-content" Jan 28 16:07:24 crc kubenswrapper[4656]: E0128 16:07:24.199474 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="extract-content" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199481 4656 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="extract-content" Jan 28 16:07:24 crc kubenswrapper[4656]: E0128 16:07:24.199495 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="extract-utilities" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199503 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="extract-utilities" Jan 28 16:07:24 crc kubenswrapper[4656]: E0128 16:07:24.199527 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="registry-server" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199537 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="registry-server" Jan 28 16:07:24 crc kubenswrapper[4656]: E0128 16:07:24.199553 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199560 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" Jan 28 16:07:24 crc kubenswrapper[4656]: E0128 16:07:24.199572 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="extract-utilities" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199581 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="extract-utilities" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199845 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="registry-server" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199877 4656 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.201052 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.214594 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5j5mh"/"openshift-service-ca.crt" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.224344 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5j5mh"/"kube-root-ca.crt" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.312848 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7nj2\" (UniqueName: \"kubernetes.io/projected/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-kube-api-access-w7nj2\") pod \"must-gather-llsvb\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") " pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.312942 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-must-gather-output\") pod \"must-gather-llsvb\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") " pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.343370 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5j5mh/must-gather-llsvb"] Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.423326 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7nj2\" (UniqueName: \"kubernetes.io/projected/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-kube-api-access-w7nj2\") pod \"must-gather-llsvb\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") " 
pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.423384 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-must-gather-output\") pod \"must-gather-llsvb\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") " pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.423867 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-must-gather-output\") pod \"must-gather-llsvb\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") " pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.456138 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7nj2\" (UniqueName: \"kubernetes.io/projected/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-kube-api-access-w7nj2\") pod \"must-gather-llsvb\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") " pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.525652 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:25 crc kubenswrapper[4656]: I0128 16:07:25.076507 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5j5mh/must-gather-llsvb"] Jan 28 16:07:25 crc kubenswrapper[4656]: W0128 16:07:25.095660 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c1ba1d6_2c0d_4263_a2ae_a744f60f89b9.slice/crio-e68b2490cb51a7d99ed49ed4724ef0b0282616f5a2b4dc83da0d0d6727a62f4f WatchSource:0}: Error finding container e68b2490cb51a7d99ed49ed4724ef0b0282616f5a2b4dc83da0d0d6727a62f4f: Status 404 returned error can't find the container with id e68b2490cb51a7d99ed49ed4724ef0b0282616f5a2b4dc83da0d0d6727a62f4f Jan 28 16:07:25 crc kubenswrapper[4656]: I0128 16:07:25.882139 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/must-gather-llsvb" event={"ID":"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9","Type":"ContainerStarted","Data":"e68b2490cb51a7d99ed49ed4724ef0b0282616f5a2b4dc83da0d0d6727a62f4f"} Jan 28 16:07:34 crc kubenswrapper[4656]: I0128 16:07:34.962962 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/must-gather-llsvb" event={"ID":"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9","Type":"ContainerStarted","Data":"f859a2e2caa0fb544c6c6a44179d086e76e997ea8b2edcf00d53e17d188daeea"} Jan 28 16:07:34 crc kubenswrapper[4656]: I0128 16:07:34.963573 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/must-gather-llsvb" event={"ID":"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9","Type":"ContainerStarted","Data":"6baa9b5e64a74d90d0dfc29c5dbed884e993ea7992e2b03687f98f3fda85bb0b"} Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.353098 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5j5mh/crc-debug-mp6g6"] Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.354883 4656 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.358829 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5j5mh"/"default-dockercfg-5hbgb" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.469837 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk459\" (UniqueName: \"kubernetes.io/projected/4a9db18e-2297-4236-b517-de3b26ead9a1-kube-api-access-bk459\") pod \"crc-debug-mp6g6\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.470137 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a9db18e-2297-4236-b517-de3b26ead9a1-host\") pod \"crc-debug-mp6g6\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.572007 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk459\" (UniqueName: \"kubernetes.io/projected/4a9db18e-2297-4236-b517-de3b26ead9a1-kube-api-access-bk459\") pod \"crc-debug-mp6g6\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.572118 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a9db18e-2297-4236-b517-de3b26ead9a1-host\") pod \"crc-debug-mp6g6\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.572336 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/4a9db18e-2297-4236-b517-de3b26ead9a1-host\") pod \"crc-debug-mp6g6\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.592510 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk459\" (UniqueName: \"kubernetes.io/projected/4a9db18e-2297-4236-b517-de3b26ead9a1-kube-api-access-bk459\") pod \"crc-debug-mp6g6\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.675330 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: W0128 16:07:35.717316 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a9db18e_2297_4236_b517_de3b26ead9a1.slice/crio-9a047c77aa7202ec9ac0a0cb49e1b2a8cd2c54d8a23582ab741c2f75fabdb82a WatchSource:0}: Error finding container 9a047c77aa7202ec9ac0a0cb49e1b2a8cd2c54d8a23582ab741c2f75fabdb82a: Status 404 returned error can't find the container with id 9a047c77aa7202ec9ac0a0cb49e1b2a8cd2c54d8a23582ab741c2f75fabdb82a Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.975078 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" event={"ID":"4a9db18e-2297-4236-b517-de3b26ead9a1","Type":"ContainerStarted","Data":"9a047c77aa7202ec9ac0a0cb49e1b2a8cd2c54d8a23582ab741c2f75fabdb82a"} Jan 28 16:07:36 crc kubenswrapper[4656]: I0128 16:07:36.002781 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5j5mh/must-gather-llsvb" podStartSLOduration=2.713314524 podStartE2EDuration="12.002751781s" podCreationTimestamp="2026-01-28 16:07:24 +0000 UTC" firstStartedPulling="2026-01-28 
16:07:25.120622619 +0000 UTC m=+2935.628793423" lastFinishedPulling="2026-01-28 16:07:34.410059876 +0000 UTC m=+2944.918230680" observedRunningTime="2026-01-28 16:07:35.994064822 +0000 UTC m=+2946.502235636" watchObservedRunningTime="2026-01-28 16:07:36.002751781 +0000 UTC m=+2946.510922585" Jan 28 16:07:41 crc kubenswrapper[4656]: I0128 16:07:41.264070 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:07:41 crc kubenswrapper[4656]: I0128 16:07:41.264833 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:07:51 crc kubenswrapper[4656]: E0128 16:07:51.165343 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Jan 28 16:07:51 crc kubenswrapper[4656]: E0128 16:07:51.167611 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins 
block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bk459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-mp6g6_openshift-must-gather-5j5mh(4a9db18e-2297-4236-b517-de3b26ead9a1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:07:51 crc kubenswrapper[4656]: E0128 16:07:51.169093 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" podUID="4a9db18e-2297-4236-b517-de3b26ead9a1" Jan 28 16:07:52 crc kubenswrapper[4656]: E0128 16:07:52.146264 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" podUID="4a9db18e-2297-4236-b517-de3b26ead9a1" Jan 28 16:08:06 crc kubenswrapper[4656]: I0128 16:08:06.339983 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" event={"ID":"4a9db18e-2297-4236-b517-de3b26ead9a1","Type":"ContainerStarted","Data":"56f14a3cf7039489472a1e9af12718c79dc3de07799d984f8c3f92c6f2afe477"} Jan 28 16:08:11 crc kubenswrapper[4656]: I0128 16:08:11.264086 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:08:11 crc kubenswrapper[4656]: I0128 16:08:11.265946 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:08:28 crc kubenswrapper[4656]: I0128 16:08:28.655162 4656 generic.go:334] "Generic (PLEG): container finished" podID="4a9db18e-2297-4236-b517-de3b26ead9a1" containerID="56f14a3cf7039489472a1e9af12718c79dc3de07799d984f8c3f92c6f2afe477" exitCode=0 Jan 28 16:08:28 crc kubenswrapper[4656]: I0128 
16:08:28.655318 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" event={"ID":"4a9db18e-2297-4236-b517-de3b26ead9a1","Type":"ContainerDied","Data":"56f14a3cf7039489472a1e9af12718c79dc3de07799d984f8c3f92c6f2afe477"} Jan 28 16:08:29 crc kubenswrapper[4656]: I0128 16:08:29.772180 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:08:29 crc kubenswrapper[4656]: I0128 16:08:29.825398 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5j5mh/crc-debug-mp6g6"] Jan 28 16:08:29 crc kubenswrapper[4656]: I0128 16:08:29.843681 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5j5mh/crc-debug-mp6g6"] Jan 28 16:08:29 crc kubenswrapper[4656]: I0128 16:08:29.938472 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk459\" (UniqueName: \"kubernetes.io/projected/4a9db18e-2297-4236-b517-de3b26ead9a1-kube-api-access-bk459\") pod \"4a9db18e-2297-4236-b517-de3b26ead9a1\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " Jan 28 16:08:29 crc kubenswrapper[4656]: I0128 16:08:29.938694 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a9db18e-2297-4236-b517-de3b26ead9a1-host\") pod \"4a9db18e-2297-4236-b517-de3b26ead9a1\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " Jan 28 16:08:29 crc kubenswrapper[4656]: I0128 16:08:29.939098 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a9db18e-2297-4236-b517-de3b26ead9a1-host" (OuterVolumeSpecName: "host") pod "4a9db18e-2297-4236-b517-de3b26ead9a1" (UID: "4a9db18e-2297-4236-b517-de3b26ead9a1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:08:29 crc kubenswrapper[4656]: I0128 16:08:29.948450 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a9db18e-2297-4236-b517-de3b26ead9a1-kube-api-access-bk459" (OuterVolumeSpecName: "kube-api-access-bk459") pod "4a9db18e-2297-4236-b517-de3b26ead9a1" (UID: "4a9db18e-2297-4236-b517-de3b26ead9a1"). InnerVolumeSpecName "kube-api-access-bk459". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:30 crc kubenswrapper[4656]: I0128 16:08:30.040760 4656 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a9db18e-2297-4236-b517-de3b26ead9a1-host\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:30 crc kubenswrapper[4656]: I0128 16:08:30.041075 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk459\" (UniqueName: \"kubernetes.io/projected/4a9db18e-2297-4236-b517-de3b26ead9a1-kube-api-access-bk459\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:30 crc kubenswrapper[4656]: I0128 16:08:30.680655 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a047c77aa7202ec9ac0a0cb49e1b2a8cd2c54d8a23582ab741c2f75fabdb82a" Jan 28 16:08:30 crc kubenswrapper[4656]: I0128 16:08:30.680737 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.122283 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5j5mh/crc-debug-mt444"] Jan 28 16:08:31 crc kubenswrapper[4656]: E0128 16:08:31.122885 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9db18e-2297-4236-b517-de3b26ead9a1" containerName="container-00" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.122904 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9db18e-2297-4236-b517-de3b26ead9a1" containerName="container-00" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.123189 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9db18e-2297-4236-b517-de3b26ead9a1" containerName="container-00" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.124113 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.127972 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5j5mh"/"default-dockercfg-5hbgb" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.205571 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a9db18e-2297-4236-b517-de3b26ead9a1" path="/var/lib/kubelet/pods/4a9db18e-2297-4236-b517-de3b26ead9a1/volumes" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.283146 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4db035db-0355-474f-8b0a-901729dd209b-host\") pod \"crc-debug-mt444\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.283298 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-l8pr8\" (UniqueName: \"kubernetes.io/projected/4db035db-0355-474f-8b0a-901729dd209b-kube-api-access-l8pr8\") pod \"crc-debug-mt444\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.384632 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8pr8\" (UniqueName: \"kubernetes.io/projected/4db035db-0355-474f-8b0a-901729dd209b-kube-api-access-l8pr8\") pod \"crc-debug-mt444\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.384774 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4db035db-0355-474f-8b0a-901729dd209b-host\") pod \"crc-debug-mt444\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.384886 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4db035db-0355-474f-8b0a-901729dd209b-host\") pod \"crc-debug-mt444\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.420300 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8pr8\" (UniqueName: \"kubernetes.io/projected/4db035db-0355-474f-8b0a-901729dd209b-kube-api-access-l8pr8\") pod \"crc-debug-mt444\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.450597 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.699944 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/crc-debug-mt444" event={"ID":"4db035db-0355-474f-8b0a-901729dd209b","Type":"ContainerStarted","Data":"446b08d4835c835ff17f9da2e83f5afa02d465d3ecd7cd2efc5548f3f6362321"} Jan 28 16:08:32 crc kubenswrapper[4656]: I0128 16:08:32.710143 4656 generic.go:334] "Generic (PLEG): container finished" podID="4db035db-0355-474f-8b0a-901729dd209b" containerID="fbc64e0a3e9a4ebf2ebec5c723e100b57156e1f3139bab5f6fa4f521a05d3cd1" exitCode=1 Jan 28 16:08:32 crc kubenswrapper[4656]: I0128 16:08:32.710244 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/crc-debug-mt444" event={"ID":"4db035db-0355-474f-8b0a-901729dd209b","Type":"ContainerDied","Data":"fbc64e0a3e9a4ebf2ebec5c723e100b57156e1f3139bab5f6fa4f521a05d3cd1"} Jan 28 16:08:32 crc kubenswrapper[4656]: I0128 16:08:32.767854 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5j5mh/crc-debug-mt444"] Jan 28 16:08:32 crc kubenswrapper[4656]: I0128 16:08:32.777905 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5j5mh/crc-debug-mt444"] Jan 28 16:08:33 crc kubenswrapper[4656]: I0128 16:08:33.816736 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:33 crc kubenswrapper[4656]: I0128 16:08:33.857702 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8pr8\" (UniqueName: \"kubernetes.io/projected/4db035db-0355-474f-8b0a-901729dd209b-kube-api-access-l8pr8\") pod \"4db035db-0355-474f-8b0a-901729dd209b\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " Jan 28 16:08:33 crc kubenswrapper[4656]: I0128 16:08:33.878123 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4db035db-0355-474f-8b0a-901729dd209b-kube-api-access-l8pr8" (OuterVolumeSpecName: "kube-api-access-l8pr8") pod "4db035db-0355-474f-8b0a-901729dd209b" (UID: "4db035db-0355-474f-8b0a-901729dd209b"). InnerVolumeSpecName "kube-api-access-l8pr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:33 crc kubenswrapper[4656]: I0128 16:08:33.959205 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4db035db-0355-474f-8b0a-901729dd209b-host\") pod \"4db035db-0355-474f-8b0a-901729dd209b\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " Jan 28 16:08:33 crc kubenswrapper[4656]: I0128 16:08:33.959680 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8pr8\" (UniqueName: \"kubernetes.io/projected/4db035db-0355-474f-8b0a-901729dd209b-kube-api-access-l8pr8\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:33 crc kubenswrapper[4656]: I0128 16:08:33.959720 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4db035db-0355-474f-8b0a-901729dd209b-host" (OuterVolumeSpecName: "host") pod "4db035db-0355-474f-8b0a-901729dd209b" (UID: "4db035db-0355-474f-8b0a-901729dd209b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:08:34 crc kubenswrapper[4656]: I0128 16:08:34.061340 4656 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4db035db-0355-474f-8b0a-901729dd209b-host\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:34 crc kubenswrapper[4656]: I0128 16:08:34.734792 4656 scope.go:117] "RemoveContainer" containerID="fbc64e0a3e9a4ebf2ebec5c723e100b57156e1f3139bab5f6fa4f521a05d3cd1" Jan 28 16:08:34 crc kubenswrapper[4656]: I0128 16:08:34.734811 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:35 crc kubenswrapper[4656]: I0128 16:08:35.180283 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4db035db-0355-474f-8b0a-901729dd209b" path="/var/lib/kubelet/pods/4db035db-0355-474f-8b0a-901729dd209b/volumes" Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.263712 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.264150 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.264211 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.264866 4656 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.264919 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" gracePeriod=600
Jan 28 16:08:41 crc kubenswrapper[4656]: E0128 16:08:41.583856 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.820827 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" exitCode=0
Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.820878 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"}
Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.820916 4656 scope.go:117] "RemoveContainer" containerID="11608f8e32efc5c69606e8eda1ef5dc362329c760e4cf15465bd4b124ca420f8"
Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.821817 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"
Jan 28 16:08:41 crc kubenswrapper[4656]: E0128 16:08:41.822223 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:08:46 crc kubenswrapper[4656]: I0128 16:08:46.983655 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-74f6bcbc87-8fng8_0d7bd9aa-43b5-4819-9bef-a61574670ba6/init/0.log"
Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.140870 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-74f6bcbc87-8fng8_0d7bd9aa-43b5-4819-9bef-a61574670ba6/init/0.log"
Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.257780 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7eff8e04-7afc-4a92-998f-db692ece65e7/kube-state-metrics/0.log"
Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.272801 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-74f6bcbc87-8fng8_0d7bd9aa-43b5-4819-9bef-a61574670ba6/dnsmasq-dns/0.log"
Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.603244 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f25455cb-6f99-4958-b7bd-9fa56e45f6e1/memcached/0.log"
Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.673033 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6a46bc21-63f0-461d-b33d-ec98cb059408/mysql-bootstrap/0.log"
Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.881000 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6a46bc21-63f0-461d-b33d-ec98cb059408/mysql-bootstrap/0.log"
Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.919433 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6a46bc21-63f0-461d-b33d-ec98cb059408/galera/0.log"
Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.920097 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e41d89b-8943-4aec-9e33-00db569a2ce8/mysql-bootstrap/0.log"
Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.217653 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e41d89b-8943-4aec-9e33-00db569a2ce8/galera/0.log"
Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.244351 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e41d89b-8943-4aec-9e33-00db569a2ce8/mysql-bootstrap/0.log"
Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.255549 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7pppv_4815f130-4106-456b-9bcb-b34536d9ddc9/ovn-controller/0.log"
Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.485798 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vs8p5_f39f654e-78ca-44c2-8c6a-a1de43a83d3f/openstack-network-exporter/0.log"
Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.608002 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-28hwk_beab0392-2167-4283-97ae-12498c5d02c1/ovsdb-server-init/0.log"
Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.784848 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-28hwk_beab0392-2167-4283-97ae-12498c5d02c1/ovsdb-server-init/0.log"
Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.835894 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-28hwk_beab0392-2167-4283-97ae-12498c5d02c1/ovs-vswitchd/0.log"
Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.890225 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-28hwk_beab0392-2167-4283-97ae-12498c5d02c1/ovsdb-server/0.log"
Jan 28 16:08:49 crc kubenswrapper[4656]: I0128 16:08:49.059318 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2fd90425-2113-4787-b18d-332f32cedd87/ovn-northd/0.log"
Jan 28 16:08:49 crc kubenswrapper[4656]: I0128 16:08:49.067264 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2fd90425-2113-4787-b18d-332f32cedd87/openstack-network-exporter/0.log"
Jan 28 16:08:49 crc kubenswrapper[4656]: I0128 16:08:49.404668 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_681fa692-9a54-4d03-a31c-952409143c4f/openstack-network-exporter/0.log"
Jan 28 16:08:49 crc kubenswrapper[4656]: I0128 16:08:49.440154 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_681fa692-9a54-4d03-a31c-952409143c4f/ovsdbserver-nb/0.log"
Jan 28 16:08:49 crc kubenswrapper[4656]: I0128 16:08:49.577287 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_da949f76-8013-4824-bda9-0656b43920b5/openstack-network-exporter/0.log"
Jan 28 16:08:49 crc kubenswrapper[4656]: I0128 16:08:49.643124 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_da949f76-8013-4824-bda9-0656b43920b5/ovsdbserver-sb/0.log"
Jan 28 16:08:49 crc kubenswrapper[4656]: I0128 16:08:49.839121 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_07f26e32-4b43-4591-9ed2-6426a96e596e/setup-container/0.log"
Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.034739 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_07f26e32-4b43-4591-9ed2-6426a96e596e/setup-container/0.log"
Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.068698 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2239f1cd-f384-40df-9f71-a46caf290038/setup-container/0.log"
Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.138470 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_07f26e32-4b43-4591-9ed2-6426a96e596e/rabbitmq/0.log"
Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.441167 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2239f1cd-f384-40df-9f71-a46caf290038/setup-container/0.log"
Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.450571 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2239f1cd-f384-40df-9f71-a46caf290038/rabbitmq/0.log"
Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.511761 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-mfbzm_5db57b48-1e29-4c73-b488-d6998232fce1/swift-ring-rebalance/0.log"
Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.753352 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/account-auditor/0.log"
Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.808122 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/account-reaper/0.log"
Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.847021 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/account-replicator/0.log"
Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.981005 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/account-server/0.log"
Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.090243 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/container-auditor/0.log"
Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.130061 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/container-server/0.log"
Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.147503 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/container-replicator/0.log"
Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.240093 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/container-updater/0.log"
Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.337463 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/object-auditor/0.log"
Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.390247 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/object-expirer/0.log"
Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.448568 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/object-replicator/0.log"
Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.549491 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/object-server/0.log"
Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.601914 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/object-updater/0.log"
Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.652053 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/rsync/0.log"
Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.733423 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/swift-recon-cron/0.log"
Jan 28 16:08:57 crc kubenswrapper[4656]: I0128 16:08:57.171330 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"
Jan 28 16:08:57 crc kubenswrapper[4656]: E0128 16:08:57.172384 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:09:10 crc kubenswrapper[4656]: I0128 16:09:10.170747 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"
Jan 28 16:09:10 crc kubenswrapper[4656]: E0128 16:09:10.171734 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:09:14 crc kubenswrapper[4656]: I0128 16:09:14.579726 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6bc7f4f4cf-hd57q_6ce4cdbc-3227-4679-8da9-9fd537996bd7/manager/0.log"
Jan 28 16:09:14 crc kubenswrapper[4656]: I0128 16:09:14.832363 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq_a11dcdce-e6bc-48a2-b273-3755e5aee495/util/0.log"
Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.085570 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq_a11dcdce-e6bc-48a2-b273-3755e5aee495/util/0.log"
Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.116925 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq_a11dcdce-e6bc-48a2-b273-3755e5aee495/pull/0.log"
Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.162080 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq_a11dcdce-e6bc-48a2-b273-3755e5aee495/pull/0.log"
Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.293810 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq_a11dcdce-e6bc-48a2-b273-3755e5aee495/util/0.log"
Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.358579 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq_a11dcdce-e6bc-48a2-b273-3755e5aee495/extract/0.log"
Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.438643 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq_a11dcdce-e6bc-48a2-b273-3755e5aee495/pull/0.log"
Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.563366 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-f6487bd57-jwv7f_0cf0a4ad-85dd-47df-9307-e469f075a098/manager/0.log"
Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.702887 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66dfbd6f5d-r8cjw_cfeab083-1268-47aa-938e-bd91036755de/manager/0.log"
Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.887620 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6db5dbd896-cfpjq_113ba11f-aeba-4710-b5f6-0991e9766d45/manager/0.log"
Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.933614 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-587c6bfdcf-xjnqt_45be18b4-f249-4c09-8875-9959686d7f8f/manager/0.log"
Jan 28 16:09:16 crc kubenswrapper[4656]: I0128 16:09:16.194973 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-9q5lw_1bfa2d1e-9ab0-478a-a19d-d031a1a8a312/manager/0.log"
Jan 28 16:09:16 crc kubenswrapper[4656]: I0128 16:09:16.263890 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-bfl2p_7341d49c-e9a9-4108-8a2c-bf808ccb49cf/manager/0.log"
Jan 28 16:09:16 crc kubenswrapper[4656]: I0128 16:09:16.639233 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-958664b5-m9jtk_ae47e69a-49f4-4b1a-8d68-068b5e99f22a/manager/0.log"
Jan 28 16:09:16 crc kubenswrapper[4656]: I0128 16:09:16.720885 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b84b46695-86ht2_a5bdaf78-b590-429f-bc9b-46c67a369456/manager/0.log"
Jan 28 16:09:17 crc kubenswrapper[4656]: I0128 16:09:17.742039 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-p92zm_9277e421-df3a-49a2-81cc-86d0f7c65809/manager/0.log"
Jan 28 16:09:17 crc kubenswrapper[4656]: I0128 16:09:17.786799 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-765668569f-7kctj_0a83428f-312c-4590-beb3-8da4994c8951/manager/0.log"
Jan 28 16:09:18 crc kubenswrapper[4656]: I0128 16:09:18.210092 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-694c5bfc85-rjfbj_9954b0be-71f8-430b-a61f-28a95404c0f7/manager/0.log"
Jan 28 16:09:18 crc kubenswrapper[4656]: I0128 16:09:18.269613 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-gqr2d_f37006c8-da19-4d17-a6d5-f4b075f2220f/manager/0.log"
Jan 28 16:09:18 crc kubenswrapper[4656]: I0128 16:09:18.446031 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5c765b4558-wjspj_50db0152-72c0-4fc3-9cd5-6b2c01127341/manager/0.log"
Jan 28 16:09:18 crc kubenswrapper[4656]: I0128 16:09:18.745642 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4df55nv_3dcf45d4-628c-4071-b732-8ade2d3c4b4e/manager/0.log"
Jan 28 16:09:19 crc kubenswrapper[4656]: I0128 16:09:19.320483 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-678d9cfb88-c5xvb_ab5fdcdc-7606-4e97-a65a-c98545c1a74a/operator/0.log"
Jan 28 16:09:19 crc kubenswrapper[4656]: I0128 16:09:19.421537 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-tknhq_c65965be-4267-4c92-a9b1-046d85299b2c/registry-server/0.log"
Jan 28 16:09:19 crc kubenswrapper[4656]: I0128 16:09:19.422992 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-57d89bf95c-gltwn_010cc4f5-4ac8-46e0-be08-80218981003e/manager/0.log"
Jan 28 16:09:19 crc kubenswrapper[4656]: I0128 16:09:19.727073 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-brxps_92d1569e-5733-4779-b9fb-7feae2ea9317/manager/0.log"
Jan 28 16:09:19 crc kubenswrapper[4656]: I0128 16:09:19.739075 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-rmvr2_132d53b6-84ec-44d6-8f8f-762e9595919e/manager/0.log"
Jan 28 16:09:19 crc kubenswrapper[4656]: I0128 16:09:19.981117 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-sqgs8_0bb42d6d-259a-4532-b3e2-732c0f271d9a/operator/0.log"
Jan 28 16:09:20 crc kubenswrapper[4656]: I0128 16:09:20.120978 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-9q9vg_e97e04fa-1b66-4373-b31f-12089f1f5b2b/manager/0.log"
Jan 28 16:09:20 crc kubenswrapper[4656]: I0128 16:09:20.329827 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-bxkwv_d903ea5b-f13e-43d5-b65b-44093c70ddee/manager/0.log"
Jan 28 16:09:20 crc kubenswrapper[4656]: I0128 16:09:20.441638 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d69b9c5db-nmjz8_5888a906-8758-4179-a30f-c2244ec46072/manager/0.log"
Jan 28 16:09:20 crc kubenswrapper[4656]: I0128 16:09:20.531673 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-767b8bc766-xlrqs_36524b9c-daa2-46d2-a732-b0964bb08873/manager/0.log"
Jan 28 16:09:25 crc kubenswrapper[4656]: I0128 16:09:25.171920 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"
Jan 28 16:09:25 crc kubenswrapper[4656]: E0128 16:09:25.172644 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.720500 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b8c2w"]
Jan 28 16:09:29 crc kubenswrapper[4656]: E0128 16:09:29.722923 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db035db-0355-474f-8b0a-901729dd209b" containerName="container-00"
Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.722968 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db035db-0355-474f-8b0a-901729dd209b" containerName="container-00"
Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.723328 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="4db035db-0355-474f-8b0a-901729dd209b" containerName="container-00"
Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.724744 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.757627 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b8c2w"]
Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.875321 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-utilities\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.875702 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h45w\" (UniqueName: \"kubernetes.io/projected/d76256ac-0d22-49d2-ba57-036ac864356f-kube-api-access-6h45w\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.875766 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-catalog-content\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.977929 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-utilities\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.978003 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h45w\" (UniqueName: \"kubernetes.io/projected/d76256ac-0d22-49d2-ba57-036ac864356f-kube-api-access-6h45w\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.978057 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-catalog-content\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.978790 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-catalog-content\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.979015 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-utilities\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:30 crc kubenswrapper[4656]: I0128 16:09:30.017417 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h45w\" (UniqueName: \"kubernetes.io/projected/d76256ac-0d22-49d2-ba57-036ac864356f-kube-api-access-6h45w\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:30 crc kubenswrapper[4656]: I0128 16:09:30.051191 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:30 crc kubenswrapper[4656]: I0128 16:09:30.616621 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b8c2w"]
Jan 28 16:09:31 crc kubenswrapper[4656]: I0128 16:09:31.332723 4656 generic.go:334] "Generic (PLEG): container finished" podID="d76256ac-0d22-49d2-ba57-036ac864356f" containerID="340c910a26d3bd13fb9584ce8111f10337cb97a436c25e510aece355a5a9e46f" exitCode=0
Jan 28 16:09:31 crc kubenswrapper[4656]: I0128 16:09:31.332799 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8c2w" event={"ID":"d76256ac-0d22-49d2-ba57-036ac864356f","Type":"ContainerDied","Data":"340c910a26d3bd13fb9584ce8111f10337cb97a436c25e510aece355a5a9e46f"}
Jan 28 16:09:31 crc kubenswrapper[4656]: I0128 16:09:31.333043 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8c2w" event={"ID":"d76256ac-0d22-49d2-ba57-036ac864356f","Type":"ContainerStarted","Data":"1aa788840bf60cd11879a71864d4bb1a7ebaf205bb904dfe6c72a639dfb0bf8d"}
Jan 28 16:09:31 crc kubenswrapper[4656]: I0128 16:09:31.336063 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 16:09:32 crc kubenswrapper[4656]: I0128 16:09:32.342808 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8c2w" event={"ID":"d76256ac-0d22-49d2-ba57-036ac864356f","Type":"ContainerStarted","Data":"f9d30d085cbc81d644ccb39179effeb6ce91c140b8c38d8e3a58cc5acacebd06"}
Jan 28 16:09:33 crc kubenswrapper[4656]: I0128 16:09:33.353865 4656 generic.go:334] "Generic (PLEG): container finished" podID="d76256ac-0d22-49d2-ba57-036ac864356f" containerID="f9d30d085cbc81d644ccb39179effeb6ce91c140b8c38d8e3a58cc5acacebd06" exitCode=0
Jan 28 16:09:33 crc kubenswrapper[4656]: I0128 16:09:33.354040 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8c2w" event={"ID":"d76256ac-0d22-49d2-ba57-036ac864356f","Type":"ContainerDied","Data":"f9d30d085cbc81d644ccb39179effeb6ce91c140b8c38d8e3a58cc5acacebd06"}
Jan 28 16:09:35 crc kubenswrapper[4656]: I0128 16:09:35.372909 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8c2w" event={"ID":"d76256ac-0d22-49d2-ba57-036ac864356f","Type":"ContainerStarted","Data":"315c14838ccaa501cafc46becd5cba73d99c86e61e653eaebb694de91a654724"}
Jan 28 16:09:35 crc kubenswrapper[4656]: I0128 16:09:35.401367 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b8c2w" podStartSLOduration=3.31007837 podStartE2EDuration="6.40133712s" podCreationTimestamp="2026-01-28 16:09:29 +0000 UTC" firstStartedPulling="2026-01-28 16:09:31.335690816 +0000 UTC m=+3061.843861620" lastFinishedPulling="2026-01-28 16:09:34.426949566 +0000 UTC m=+3064.935120370" observedRunningTime="2026-01-28 16:09:35.400530027 +0000 UTC m=+3065.908700841" watchObservedRunningTime="2026-01-28 16:09:35.40133712 +0000 UTC m=+3065.909507914"
Jan 28 16:09:40 crc kubenswrapper[4656]: I0128 16:09:40.052414 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:40 crc kubenswrapper[4656]: I0128 16:09:40.053000 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:40 crc kubenswrapper[4656]: I0128 16:09:40.104885 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:40 crc kubenswrapper[4656]: I0128 16:09:40.210107 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"
Jan 28 16:09:40 crc kubenswrapper[4656]: E0128 16:09:40.210907 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:09:40 crc kubenswrapper[4656]: I0128 16:09:40.527954 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:40 crc kubenswrapper[4656]: I0128 16:09:40.601417 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b8c2w"]
Jan 28 16:09:42 crc kubenswrapper[4656]: I0128 16:09:42.496002 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b8c2w" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="registry-server" containerID="cri-o://315c14838ccaa501cafc46becd5cba73d99c86e61e653eaebb694de91a654724" gracePeriod=2
Jan 28 16:09:44 crc kubenswrapper[4656]: I0128 16:09:44.528881 4656 generic.go:334] "Generic (PLEG): container finished" podID="d76256ac-0d22-49d2-ba57-036ac864356f" containerID="315c14838ccaa501cafc46becd5cba73d99c86e61e653eaebb694de91a654724" exitCode=0
Jan 28 16:09:44 crc kubenswrapper[4656]: I0128 16:09:44.528927 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8c2w" event={"ID":"d76256ac-0d22-49d2-ba57-036ac864356f","Type":"ContainerDied","Data":"315c14838ccaa501cafc46becd5cba73d99c86e61e653eaebb694de91a654724"}
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.058072 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.113127 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h45w\" (UniqueName: \"kubernetes.io/projected/d76256ac-0d22-49d2-ba57-036ac864356f-kube-api-access-6h45w\") pod \"d76256ac-0d22-49d2-ba57-036ac864356f\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") "
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.113288 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-utilities\") pod \"d76256ac-0d22-49d2-ba57-036ac864356f\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") "
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.113350 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-catalog-content\") pod \"d76256ac-0d22-49d2-ba57-036ac864356f\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") "
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.114341 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-utilities" (OuterVolumeSpecName: "utilities") pod "d76256ac-0d22-49d2-ba57-036ac864356f" (UID: "d76256ac-0d22-49d2-ba57-036ac864356f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.118505 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d76256ac-0d22-49d2-ba57-036ac864356f-kube-api-access-6h45w" (OuterVolumeSpecName: "kube-api-access-6h45w") pod "d76256ac-0d22-49d2-ba57-036ac864356f" (UID: "d76256ac-0d22-49d2-ba57-036ac864356f"). InnerVolumeSpecName "kube-api-access-6h45w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.182712 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d76256ac-0d22-49d2-ba57-036ac864356f" (UID: "d76256ac-0d22-49d2-ba57-036ac864356f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.216691 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h45w\" (UniqueName: \"kubernetes.io/projected/d76256ac-0d22-49d2-ba57-036ac864356f-kube-api-access-6h45w\") on node \"crc\" DevicePath \"\""
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.216728 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.216740 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.537952 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8c2w" event={"ID":"d76256ac-0d22-49d2-ba57-036ac864356f","Type":"ContainerDied","Data":"1aa788840bf60cd11879a71864d4bb1a7ebaf205bb904dfe6c72a639dfb0bf8d"}
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.538034 4656 scope.go:117] "RemoveContainer" containerID="315c14838ccaa501cafc46becd5cba73d99c86e61e653eaebb694de91a654724"
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.538245 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b8c2w"
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.559290 4656 scope.go:117] "RemoveContainer" containerID="f9d30d085cbc81d644ccb39179effeb6ce91c140b8c38d8e3a58cc5acacebd06"
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.568020 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b8c2w"]
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.580885 4656 scope.go:117] "RemoveContainer" containerID="340c910a26d3bd13fb9584ce8111f10337cb97a436c25e510aece355a5a9e46f"
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.603072 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b8c2w"]
Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.813772 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-w7bws_c50cc4de-dd25-4337-a532-3384d5a87626/control-plane-machine-set-operator/0.log"
Jan 28 16:09:46 crc kubenswrapper[4656]: I0128 16:09:46.088685 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gcdpp_44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9/machine-api-operator/0.log"
Jan 28 16:09:46 crc kubenswrapper[4656]: I0128 16:09:46.088764 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gcdpp_44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9/kube-rbac-proxy/0.log"
Jan 28 16:09:47 crc kubenswrapper[4656]: I0128 16:09:47.182546 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" path="/var/lib/kubelet/pods/d76256ac-0d22-49d2-ba57-036ac864356f/volumes"
Jan 28 16:09:55 crc kubenswrapper[4656]: I0128 16:09:55.171413 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"
Jan 28 
16:09:55 crc kubenswrapper[4656]: E0128 16:09:55.172316 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:09:59 crc kubenswrapper[4656]: I0128 16:09:59.687994 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-cpqlr_b66b5cb8-91c8-4122-b61a-d2f5f7815d26/cert-manager-controller/0.log" Jan 28 16:09:59 crc kubenswrapper[4656]: I0128 16:09:59.991023 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-8bd5j_46693ecf-5a40-4182-aeed-7161923e4016/cert-manager-cainjector/0.log" Jan 28 16:10:00 crc kubenswrapper[4656]: I0128 16:10:00.051416 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-jl8hn_2afeb8d3-6acc-42ee-aa3d-943c0784354c/cert-manager-webhook/0.log" Jan 28 16:10:08 crc kubenswrapper[4656]: I0128 16:10:08.171279 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:10:08 crc kubenswrapper[4656]: E0128 16:10:08.172275 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:10:12 crc kubenswrapper[4656]: I0128 16:10:12.282055 4656 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-2bhpr_c74f938e-5184-4ea3-afe5-373ef61c779a/nmstate-console-plugin/0.log" Jan 28 16:10:12 crc kubenswrapper[4656]: I0128 16:10:12.513386 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-rjhhj_41fa0969-44fb-4cf4-916c-da0dd393a58c/nmstate-handler/0.log" Jan 28 16:10:12 crc kubenswrapper[4656]: I0128 16:10:12.582288 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-xjdms_59adf6ef-2655-44f4-ae3d-91c315439598/kube-rbac-proxy/0.log" Jan 28 16:10:12 crc kubenswrapper[4656]: I0128 16:10:12.678676 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-xjdms_59adf6ef-2655-44f4-ae3d-91c315439598/nmstate-metrics/0.log" Jan 28 16:10:12 crc kubenswrapper[4656]: I0128 16:10:12.760724 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-bvtmh_66b69ecf-d4cf-452c-a311-93cecb247ab1/nmstate-operator/0.log" Jan 28 16:10:12 crc kubenswrapper[4656]: I0128 16:10:12.880554 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-rrl4x_0b2d8d4a-d2ba-4c29-b545-f23070527595/nmstate-webhook/0.log" Jan 28 16:10:22 crc kubenswrapper[4656]: I0128 16:10:22.171179 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:10:22 crc kubenswrapper[4656]: E0128 16:10:22.172004 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" 
Jan 28 16:10:35 crc kubenswrapper[4656]: I0128 16:10:35.171364 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:10:35 crc kubenswrapper[4656]: E0128 16:10:35.172260 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.097883 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xgjbl_a0c56151-b07f-4c02-9b4c-0b48c4dd8a03/kube-rbac-proxy/0.log" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.174697 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xgjbl_a0c56151-b07f-4c02-9b4c-0b48c4dd8a03/controller/0.log" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.426591 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-nzvcq_555938b5-7504-41e4-9331-7be899491299/frr-k8s-webhook-server/0.log" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.563387 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-frr-files/0.log" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.851258 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-metrics/0.log" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.873542 4656 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-reloader/0.log" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.883280 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-frr-files/0.log" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.922597 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-reloader/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.043072 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-frr-files/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.154498 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-reloader/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.162553 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-metrics/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.239999 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-metrics/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.413687 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-metrics/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.472754 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-reloader/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.527279 4656 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-frr-files/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.547781 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/controller/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.698394 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/frr-metrics/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.705305 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/kube-rbac-proxy/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.781838 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/kube-rbac-proxy-frr/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.927828 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/reloader/0.log" Jan 28 16:10:43 crc kubenswrapper[4656]: I0128 16:10:43.167970 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-dfcddcb8c-gjtgk_93bc850d-d691-43b6-8668-79f21bd350a7/manager/0.log" Jan 28 16:10:43 crc kubenswrapper[4656]: I0128 16:10:43.241591 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/frr/0.log" Jan 28 16:10:43 crc kubenswrapper[4656]: I0128 16:10:43.346790 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7bdb79d58b-gsggf_585f1e9a-4070-4b23-bbab-f29ae7e95cf0/webhook-server/0.log" Jan 28 16:10:43 crc kubenswrapper[4656]: I0128 16:10:43.521692 4656 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-k4qr2_8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3/kube-rbac-proxy/0.log" Jan 28 16:10:43 crc kubenswrapper[4656]: I0128 16:10:43.803750 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k4qr2_8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3/speaker/0.log" Jan 28 16:10:46 crc kubenswrapper[4656]: I0128 16:10:46.171381 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:10:46 crc kubenswrapper[4656]: E0128 16:10:46.171735 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:10:56 crc kubenswrapper[4656]: I0128 16:10:56.813138 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d_f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f/util/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.076631 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d_f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f/util/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.104900 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d_f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f/pull/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.139032 4656 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d_f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f/pull/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.322811 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d_f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f/util/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.338517 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d_f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f/pull/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.348211 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d_f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f/extract/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.547037 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_dafc02bf-d18b-4177-afa6-ac17360b54e9/util/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.741080 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_dafc02bf-d18b-4177-afa6-ac17360b54e9/pull/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.748868 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_dafc02bf-d18b-4177-afa6-ac17360b54e9/pull/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.851432 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_dafc02bf-d18b-4177-afa6-ac17360b54e9/util/0.log" Jan 28 
16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.984306 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_dafc02bf-d18b-4177-afa6-ac17360b54e9/extract/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.989039 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_dafc02bf-d18b-4177-afa6-ac17360b54e9/util/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.992749 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_dafc02bf-d18b-4177-afa6-ac17360b54e9/pull/0.log" Jan 28 16:10:58 crc kubenswrapper[4656]: I0128 16:10:58.179289 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9zl7_769c7d2c-1d96-4056-9165-ebf9a1cefc45/extract-utilities/0.log" Jan 28 16:10:58 crc kubenswrapper[4656]: I0128 16:10:58.438465 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9zl7_769c7d2c-1d96-4056-9165-ebf9a1cefc45/extract-content/0.log" Jan 28 16:10:58 crc kubenswrapper[4656]: I0128 16:10:58.444657 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9zl7_769c7d2c-1d96-4056-9165-ebf9a1cefc45/extract-utilities/0.log" Jan 28 16:10:58 crc kubenswrapper[4656]: I0128 16:10:58.445724 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9zl7_769c7d2c-1d96-4056-9165-ebf9a1cefc45/extract-content/0.log" Jan 28 16:10:58 crc kubenswrapper[4656]: I0128 16:10:58.647113 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9zl7_769c7d2c-1d96-4056-9165-ebf9a1cefc45/extract-content/0.log" Jan 28 16:10:58 crc 
kubenswrapper[4656]: I0128 16:10:58.663615 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9zl7_769c7d2c-1d96-4056-9165-ebf9a1cefc45/extract-utilities/0.log" Jan 28 16:10:59 crc kubenswrapper[4656]: I0128 16:10:59.084672 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9zl7_769c7d2c-1d96-4056-9165-ebf9a1cefc45/registry-server/0.log" Jan 28 16:10:59 crc kubenswrapper[4656]: I0128 16:10:59.102320 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gsg5_6c5ec616-76e4-4b6f-93ce-ca2dba833b37/extract-utilities/0.log" Jan 28 16:10:59 crc kubenswrapper[4656]: I0128 16:10:59.280834 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gsg5_6c5ec616-76e4-4b6f-93ce-ca2dba833b37/extract-utilities/0.log" Jan 28 16:10:59 crc kubenswrapper[4656]: I0128 16:10:59.389496 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gsg5_6c5ec616-76e4-4b6f-93ce-ca2dba833b37/extract-content/0.log" Jan 28 16:10:59 crc kubenswrapper[4656]: I0128 16:10:59.580589 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gsg5_6c5ec616-76e4-4b6f-93ce-ca2dba833b37/extract-content/0.log" Jan 28 16:10:59 crc kubenswrapper[4656]: I0128 16:10:59.805821 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gsg5_6c5ec616-76e4-4b6f-93ce-ca2dba833b37/extract-content/0.log" Jan 28 16:10:59 crc kubenswrapper[4656]: I0128 16:10:59.811516 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gsg5_6c5ec616-76e4-4b6f-93ce-ca2dba833b37/extract-utilities/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.142858 4656 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-2xzq6_c48ec7e7-12ff-47f6-9a82-59078f7c2b04/marketplace-operator/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.173034 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:11:00 crc kubenswrapper[4656]: E0128 16:11:00.173301 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.282822 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2b7pm_816a03ab-31e5-4d9a-b66c-3787ac9335a9/extract-utilities/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.493302 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gsg5_6c5ec616-76e4-4b6f-93ce-ca2dba833b37/registry-server/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.543794 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2b7pm_816a03ab-31e5-4d9a-b66c-3787ac9335a9/extract-utilities/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.569068 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2b7pm_816a03ab-31e5-4d9a-b66c-3787ac9335a9/extract-content/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.572439 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2b7pm_816a03ab-31e5-4d9a-b66c-3787ac9335a9/extract-content/0.log" Jan 28 
16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.859850 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2b7pm_816a03ab-31e5-4d9a-b66c-3787ac9335a9/extract-utilities/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.940681 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2b7pm_816a03ab-31e5-4d9a-b66c-3787ac9335a9/extract-content/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.996902 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2b7pm_816a03ab-31e5-4d9a-b66c-3787ac9335a9/registry-server/0.log" Jan 28 16:11:01 crc kubenswrapper[4656]: I0128 16:11:01.148746 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rzhzn_885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9/extract-utilities/0.log" Jan 28 16:11:01 crc kubenswrapper[4656]: I0128 16:11:01.342848 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rzhzn_885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9/extract-content/0.log" Jan 28 16:11:01 crc kubenswrapper[4656]: I0128 16:11:01.346351 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rzhzn_885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9/extract-utilities/0.log" Jan 28 16:11:01 crc kubenswrapper[4656]: I0128 16:11:01.363758 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rzhzn_885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9/extract-content/0.log" Jan 28 16:11:01 crc kubenswrapper[4656]: I0128 16:11:01.570715 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rzhzn_885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9/extract-content/0.log" Jan 28 16:11:01 crc kubenswrapper[4656]: I0128 16:11:01.572589 4656 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-rzhzn_885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9/extract-utilities/0.log" Jan 28 16:11:01 crc kubenswrapper[4656]: I0128 16:11:01.745071 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rzhzn_885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9/registry-server/0.log" Jan 28 16:11:12 crc kubenswrapper[4656]: I0128 16:11:12.171552 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:11:12 crc kubenswrapper[4656]: E0128 16:11:12.172179 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:11:25 crc kubenswrapper[4656]: I0128 16:11:25.170733 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:11:25 crc kubenswrapper[4656]: E0128 16:11:25.171485 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:11:33 crc kubenswrapper[4656]: I0128 16:11:33.894183 4656 scope.go:117] "RemoveContainer" containerID="25707c1b7aa34daf8d1d203593fed712625cdedcf4f7a415d31a120483a3edfb" Jan 28 16:11:33 crc kubenswrapper[4656]: I0128 16:11:33.937577 4656 scope.go:117] "RemoveContainer" 
containerID="1023efe98e9294c68f6dd563afee559a68f500f630f7dc52bc58bfe310af0080" Jan 28 16:11:33 crc kubenswrapper[4656]: I0128 16:11:33.973827 4656 scope.go:117] "RemoveContainer" containerID="38f722e1bfe839109981c0f0f0db4eff3febe78c06d24b651cd5373d7bc12663" Jan 28 16:11:38 crc kubenswrapper[4656]: I0128 16:11:38.171286 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:11:38 crc kubenswrapper[4656]: E0128 16:11:38.172330 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:11:51 crc kubenswrapper[4656]: I0128 16:11:51.180383 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:11:51 crc kubenswrapper[4656]: E0128 16:11:51.181217 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:12:02 crc kubenswrapper[4656]: I0128 16:12:02.170559 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:12:02 crc kubenswrapper[4656]: E0128 16:12:02.171566 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:12:17 crc kubenswrapper[4656]: I0128 16:12:17.171864 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:12:17 crc kubenswrapper[4656]: E0128 16:12:17.172662 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:12:32 crc kubenswrapper[4656]: I0128 16:12:32.171018 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:12:32 crc kubenswrapper[4656]: E0128 16:12:32.173433 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:12:43 crc kubenswrapper[4656]: I0128 16:12:43.123787 4656 generic.go:334] "Generic (PLEG): container finished" podID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerID="6baa9b5e64a74d90d0dfc29c5dbed884e993ea7992e2b03687f98f3fda85bb0b" exitCode=0 Jan 28 16:12:43 crc kubenswrapper[4656]: I0128 16:12:43.123838 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-5j5mh/must-gather-llsvb" event={"ID":"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9","Type":"ContainerDied","Data":"6baa9b5e64a74d90d0dfc29c5dbed884e993ea7992e2b03687f98f3fda85bb0b"} Jan 28 16:12:43 crc kubenswrapper[4656]: I0128 16:12:43.125182 4656 scope.go:117] "RemoveContainer" containerID="6baa9b5e64a74d90d0dfc29c5dbed884e993ea7992e2b03687f98f3fda85bb0b" Jan 28 16:12:44 crc kubenswrapper[4656]: I0128 16:12:44.064000 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5j5mh_must-gather-llsvb_2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9/gather/0.log" Jan 28 16:12:47 crc kubenswrapper[4656]: I0128 16:12:47.171132 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:12:47 crc kubenswrapper[4656]: E0128 16:12:47.171898 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:12:52 crc kubenswrapper[4656]: I0128 16:12:52.793507 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5j5mh/must-gather-llsvb"] Jan 28 16:12:52 crc kubenswrapper[4656]: I0128 16:12:52.794394 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5j5mh/must-gather-llsvb" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerName="copy" containerID="cri-o://f859a2e2caa0fb544c6c6a44179d086e76e997ea8b2edcf00d53e17d188daeea" gracePeriod=2 Jan 28 16:12:52 crc kubenswrapper[4656]: I0128 16:12:52.801014 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5j5mh/must-gather-llsvb"] Jan 28 16:12:53 crc 
kubenswrapper[4656]: E0128 16:12:53.032456 4656 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c1ba1d6_2c0d_4263_a2ae_a744f60f89b9.slice/crio-conmon-f859a2e2caa0fb544c6c6a44179d086e76e997ea8b2edcf00d53e17d188daeea.scope\": RecentStats: unable to find data in memory cache]" Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.224879 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5j5mh_must-gather-llsvb_2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9/copy/0.log" Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.225527 4656 generic.go:334] "Generic (PLEG): container finished" podID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerID="f859a2e2caa0fb544c6c6a44179d086e76e997ea8b2edcf00d53e17d188daeea" exitCode=143 Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.735048 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5j5mh_must-gather-llsvb_2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9/copy/0.log" Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.736068 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.866813 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-must-gather-output\") pod \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") " Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.866878 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7nj2\" (UniqueName: \"kubernetes.io/projected/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-kube-api-access-w7nj2\") pod \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") " Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.882590 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-kube-api-access-w7nj2" (OuterVolumeSpecName: "kube-api-access-w7nj2") pod "2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" (UID: "2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9"). InnerVolumeSpecName "kube-api-access-w7nj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.969347 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7nj2\" (UniqueName: \"kubernetes.io/projected/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-kube-api-access-w7nj2\") on node \"crc\" DevicePath \"\"" Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.992336 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" (UID: "2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:12:54 crc kubenswrapper[4656]: I0128 16:12:54.071145 4656 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 28 16:12:54 crc kubenswrapper[4656]: I0128 16:12:54.235756 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5j5mh_must-gather-llsvb_2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9/copy/0.log" Jan 28 16:12:54 crc kubenswrapper[4656]: I0128 16:12:54.236141 4656 scope.go:117] "RemoveContainer" containerID="f859a2e2caa0fb544c6c6a44179d086e76e997ea8b2edcf00d53e17d188daeea" Jan 28 16:12:54 crc kubenswrapper[4656]: I0128 16:12:54.236199 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:12:54 crc kubenswrapper[4656]: I0128 16:12:54.266581 4656 scope.go:117] "RemoveContainer" containerID="6baa9b5e64a74d90d0dfc29c5dbed884e993ea7992e2b03687f98f3fda85bb0b" Jan 28 16:12:55 crc kubenswrapper[4656]: I0128 16:12:55.180765 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" path="/var/lib/kubelet/pods/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9/volumes" Jan 28 16:13:01 crc kubenswrapper[4656]: I0128 16:13:01.176457 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:13:01 crc kubenswrapper[4656]: E0128 16:13:01.177374 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" 
podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:13:14 crc kubenswrapper[4656]: I0128 16:13:14.170561 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:13:14 crc kubenswrapper[4656]: E0128 16:13:14.172545 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:13:26 crc kubenswrapper[4656]: I0128 16:13:26.170847 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:13:26 crc kubenswrapper[4656]: E0128 16:13:26.171675 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:13:37 crc kubenswrapper[4656]: I0128 16:13:37.170834 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:13:37 crc kubenswrapper[4656]: E0128 16:13:37.171730 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:13:48 crc kubenswrapper[4656]: I0128 16:13:48.171467 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:13:48 crc kubenswrapper[4656]: I0128 16:13:48.731008 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"d7999db764e2d77e39711d0b36b7eb67c612bc23b60021126111ecb7d71dcd8b"} Jan 28 16:14:34 crc kubenswrapper[4656]: I0128 16:14:34.090770 4656 scope.go:117] "RemoveContainer" containerID="56f14a3cf7039489472a1e9af12718c79dc3de07799d984f8c3f92c6f2afe477" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.164801 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"] Jan 28 16:15:00 crc kubenswrapper[4656]: E0128 16:15:00.166042 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="extract-utilities" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166062 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="extract-utilities" Jan 28 16:15:00 crc kubenswrapper[4656]: E0128 16:15:00.166083 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerName="gather" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166090 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerName="gather" Jan 28 16:15:00 crc kubenswrapper[4656]: E0128 16:15:00.166114 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="registry-server" Jan 28 
16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166122 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="registry-server" Jan 28 16:15:00 crc kubenswrapper[4656]: E0128 16:15:00.166148 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerName="copy" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166179 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerName="copy" Jan 28 16:15:00 crc kubenswrapper[4656]: E0128 16:15:00.166197 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="extract-content" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166204 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="extract-content" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166464 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerName="copy" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166500 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerName="gather" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166515 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="registry-server" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.167330 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.174896 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.175439 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.212872 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"] Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.354268 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/184f8a72-6f46-453d-8eda-d5526b224693-config-volume\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.354323 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/184f8a72-6f46-453d-8eda-d5526b224693-secret-volume\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.354357 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqc4t\" (UniqueName: \"kubernetes.io/projected/184f8a72-6f46-453d-8eda-d5526b224693-kube-api-access-dqc4t\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.455818 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/184f8a72-6f46-453d-8eda-d5526b224693-config-volume\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.455893 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/184f8a72-6f46-453d-8eda-d5526b224693-secret-volume\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.455940 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqc4t\" (UniqueName: \"kubernetes.io/projected/184f8a72-6f46-453d-8eda-d5526b224693-kube-api-access-dqc4t\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.456911 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/184f8a72-6f46-453d-8eda-d5526b224693-config-volume\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.462493 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/184f8a72-6f46-453d-8eda-d5526b224693-secret-volume\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.474962 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqc4t\" (UniqueName: \"kubernetes.io/projected/184f8a72-6f46-453d-8eda-d5526b224693-kube-api-access-dqc4t\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.511526 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" Jan 28 16:15:01 crc kubenswrapper[4656]: I0128 16:15:01.005977 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"] Jan 28 16:15:01 crc kubenswrapper[4656]: W0128 16:15:01.025309 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod184f8a72_6f46_453d_8eda_d5526b224693.slice/crio-37773ec7aeb71a870187d561534cf775fa091521097fa3c92759b4ea8afd0d46 WatchSource:0}: Error finding container 37773ec7aeb71a870187d561534cf775fa091521097fa3c92759b4ea8afd0d46: Status 404 returned error can't find the container with id 37773ec7aeb71a870187d561534cf775fa091521097fa3c92759b4ea8afd0d46 Jan 28 16:15:01 crc kubenswrapper[4656]: I0128 16:15:01.503711 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" event={"ID":"184f8a72-6f46-453d-8eda-d5526b224693","Type":"ContainerStarted","Data":"abaa04e54f286aa11e362c3eecbd9be04d343a82a9f11988bf954f99d0af2ee1"} Jan 28 16:15:01 crc 
kubenswrapper[4656]: I0128 16:15:01.504948 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" event={"ID":"184f8a72-6f46-453d-8eda-d5526b224693","Type":"ContainerStarted","Data":"37773ec7aeb71a870187d561534cf775fa091521097fa3c92759b4ea8afd0d46"} Jan 28 16:15:01 crc kubenswrapper[4656]: I0128 16:15:01.527094 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" podStartSLOduration=1.5270641999999999 podStartE2EDuration="1.5270642s" podCreationTimestamp="2026-01-28 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:15:01.525376062 +0000 UTC m=+3392.033546866" watchObservedRunningTime="2026-01-28 16:15:01.5270642 +0000 UTC m=+3392.035235004" Jan 28 16:15:02 crc kubenswrapper[4656]: I0128 16:15:02.513997 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" event={"ID":"184f8a72-6f46-453d-8eda-d5526b224693","Type":"ContainerDied","Data":"abaa04e54f286aa11e362c3eecbd9be04d343a82a9f11988bf954f99d0af2ee1"} Jan 28 16:15:02 crc kubenswrapper[4656]: I0128 16:15:02.513950 4656 generic.go:334] "Generic (PLEG): container finished" podID="184f8a72-6f46-453d-8eda-d5526b224693" containerID="abaa04e54f286aa11e362c3eecbd9be04d343a82a9f11988bf954f99d0af2ee1" exitCode=0 Jan 28 16:15:03 crc kubenswrapper[4656]: I0128 16:15:03.856645 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.023275 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/184f8a72-6f46-453d-8eda-d5526b224693-config-volume\") pod \"184f8a72-6f46-453d-8eda-d5526b224693\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.023504 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqc4t\" (UniqueName: \"kubernetes.io/projected/184f8a72-6f46-453d-8eda-d5526b224693-kube-api-access-dqc4t\") pod \"184f8a72-6f46-453d-8eda-d5526b224693\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.023566 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/184f8a72-6f46-453d-8eda-d5526b224693-secret-volume\") pod \"184f8a72-6f46-453d-8eda-d5526b224693\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.024345 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/184f8a72-6f46-453d-8eda-d5526b224693-config-volume" (OuterVolumeSpecName: "config-volume") pod "184f8a72-6f46-453d-8eda-d5526b224693" (UID: "184f8a72-6f46-453d-8eda-d5526b224693"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.031212 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/184f8a72-6f46-453d-8eda-d5526b224693-kube-api-access-dqc4t" (OuterVolumeSpecName: "kube-api-access-dqc4t") pod "184f8a72-6f46-453d-8eda-d5526b224693" (UID: "184f8a72-6f46-453d-8eda-d5526b224693"). 
InnerVolumeSpecName "kube-api-access-dqc4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.035978 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/184f8a72-6f46-453d-8eda-d5526b224693-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "184f8a72-6f46-453d-8eda-d5526b224693" (UID: "184f8a72-6f46-453d-8eda-d5526b224693"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.125400 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqc4t\" (UniqueName: \"kubernetes.io/projected/184f8a72-6f46-453d-8eda-d5526b224693-kube-api-access-dqc4t\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.125445 4656 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/184f8a72-6f46-453d-8eda-d5526b224693-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.125459 4656 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/184f8a72-6f46-453d-8eda-d5526b224693-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.296971 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n"] Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.303351 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n"] Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.529132 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" 
event={"ID":"184f8a72-6f46-453d-8eda-d5526b224693","Type":"ContainerDied","Data":"37773ec7aeb71a870187d561534cf775fa091521097fa3c92759b4ea8afd0d46"} Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.529223 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37773ec7aeb71a870187d561534cf775fa091521097fa3c92759b4ea8afd0d46" Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.529307 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" Jan 28 16:15:05 crc kubenswrapper[4656]: I0128 16:15:05.181372 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="647c2e48-5d47-46f5-bd41-1512da5aef27" path="/var/lib/kubelet/pods/647c2e48-5d47-46f5-bd41-1512da5aef27/volumes" Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.468320 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7mnj6"] Jan 28 16:15:18 crc kubenswrapper[4656]: E0128 16:15:18.469378 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184f8a72-6f46-453d-8eda-d5526b224693" containerName="collect-profiles" Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.469405 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="184f8a72-6f46-453d-8eda-d5526b224693" containerName="collect-profiles" Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.469583 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="184f8a72-6f46-453d-8eda-d5526b224693" containerName="collect-profiles" Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.470722 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.484736 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7mnj6"] Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.563670 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-catalog-content\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.563827 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-utilities\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.564020 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t246q\" (UniqueName: \"kubernetes.io/projected/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-kube-api-access-t246q\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.665184 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t246q\" (UniqueName: \"kubernetes.io/projected/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-kube-api-access-t246q\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.665279 4656 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-catalog-content\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.665321 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-utilities\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.665774 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-utilities\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.665976 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-catalog-content\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.690475 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t246q\" (UniqueName: \"kubernetes.io/projected/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-kube-api-access-t246q\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.788682 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:19 crc kubenswrapper[4656]: I0128 16:15:19.351884 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7mnj6"] Jan 28 16:15:19 crc kubenswrapper[4656]: I0128 16:15:19.636383 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mnj6" event={"ID":"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c","Type":"ContainerStarted","Data":"495b083fb6504c78ffde9f5b26f1d7ed8eed0987c7b2949f27fb691ff953481f"} Jan 28 16:15:20 crc kubenswrapper[4656]: I0128 16:15:20.646480 4656 generic.go:334] "Generic (PLEG): container finished" podID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerID="492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d" exitCode=0 Jan 28 16:15:20 crc kubenswrapper[4656]: I0128 16:15:20.646589 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mnj6" event={"ID":"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c","Type":"ContainerDied","Data":"492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d"} Jan 28 16:15:20 crc kubenswrapper[4656]: I0128 16:15:20.649678 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:15:22 crc kubenswrapper[4656]: I0128 16:15:22.678707 4656 generic.go:334] "Generic (PLEG): container finished" podID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerID="2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b" exitCode=0 Jan 28 16:15:22 crc kubenswrapper[4656]: I0128 16:15:22.678881 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mnj6" event={"ID":"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c","Type":"ContainerDied","Data":"2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b"} Jan 28 16:15:23 crc kubenswrapper[4656]: I0128 16:15:23.691435 4656 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-7mnj6" event={"ID":"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c","Type":"ContainerStarted","Data":"73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890"} Jan 28 16:15:23 crc kubenswrapper[4656]: I0128 16:15:23.714132 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7mnj6" podStartSLOduration=3.23639231 podStartE2EDuration="5.714106244s" podCreationTimestamp="2026-01-28 16:15:18 +0000 UTC" firstStartedPulling="2026-01-28 16:15:20.64928715 +0000 UTC m=+3411.157457954" lastFinishedPulling="2026-01-28 16:15:23.127001084 +0000 UTC m=+3413.635171888" observedRunningTime="2026-01-28 16:15:23.708895574 +0000 UTC m=+3414.217066378" watchObservedRunningTime="2026-01-28 16:15:23.714106244 +0000 UTC m=+3414.222277038" Jan 28 16:15:28 crc kubenswrapper[4656]: I0128 16:15:28.789060 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:28 crc kubenswrapper[4656]: I0128 16:15:28.789662 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:28 crc kubenswrapper[4656]: I0128 16:15:28.842728 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:29 crc kubenswrapper[4656]: I0128 16:15:29.861681 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:29 crc kubenswrapper[4656]: I0128 16:15:29.909923 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7mnj6"] Jan 28 16:15:31 crc kubenswrapper[4656]: I0128 16:15:31.821148 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7mnj6" 
podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="registry-server" containerID="cri-o://73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890" gracePeriod=2 Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.273786 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.520075 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-catalog-content\") pod \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.520221 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t246q\" (UniqueName: \"kubernetes.io/projected/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-kube-api-access-t246q\") pod \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.521436 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-utilities\") pod \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.522401 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-utilities" (OuterVolumeSpecName: "utilities") pod "0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" (UID: "0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.527496 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-kube-api-access-t246q" (OuterVolumeSpecName: "kube-api-access-t246q") pod "0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" (UID: "0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c"). InnerVolumeSpecName "kube-api-access-t246q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.577141 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" (UID: "0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.623728 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.623768 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.623782 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t246q\" (UniqueName: \"kubernetes.io/projected/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-kube-api-access-t246q\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.831725 4656 generic.go:334] "Generic (PLEG): container finished" podID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" 
containerID="73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890" exitCode=0 Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.831799 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mnj6" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.831809 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mnj6" event={"ID":"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c","Type":"ContainerDied","Data":"73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890"} Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.831864 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mnj6" event={"ID":"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c","Type":"ContainerDied","Data":"495b083fb6504c78ffde9f5b26f1d7ed8eed0987c7b2949f27fb691ff953481f"} Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.831924 4656 scope.go:117] "RemoveContainer" containerID="73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.858023 4656 scope.go:117] "RemoveContainer" containerID="2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.882061 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7mnj6"] Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.889558 4656 scope.go:117] "RemoveContainer" containerID="492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.890228 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7mnj6"] Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.923576 4656 scope.go:117] "RemoveContainer" containerID="73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890" Jan 28 
16:15:32 crc kubenswrapper[4656]: E0128 16:15:32.924299 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890\": container with ID starting with 73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890 not found: ID does not exist" containerID="73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.924368 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890"} err="failed to get container status \"73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890\": rpc error: code = NotFound desc = could not find container \"73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890\": container with ID starting with 73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890 not found: ID does not exist" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.924409 4656 scope.go:117] "RemoveContainer" containerID="2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b" Jan 28 16:15:32 crc kubenswrapper[4656]: E0128 16:15:32.924707 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b\": container with ID starting with 2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b not found: ID does not exist" containerID="2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.924733 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b"} err="failed to get container status 
\"2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b\": rpc error: code = NotFound desc = could not find container \"2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b\": container with ID starting with 2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b not found: ID does not exist" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.924749 4656 scope.go:117] "RemoveContainer" containerID="492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d" Jan 28 16:15:32 crc kubenswrapper[4656]: E0128 16:15:32.925116 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d\": container with ID starting with 492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d not found: ID does not exist" containerID="492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d" Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.925304 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d"} err="failed to get container status \"492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d\": rpc error: code = NotFound desc = could not find container \"492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d\": container with ID starting with 492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d not found: ID does not exist" Jan 28 16:15:33 crc kubenswrapper[4656]: I0128 16:15:33.182739 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" path="/var/lib/kubelet/pods/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c/volumes" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.068544 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-plxhp"] Jan 28 16:15:34 
crc kubenswrapper[4656]: E0128 16:15:34.069091 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="extract-content" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.069109 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="extract-content" Jan 28 16:15:34 crc kubenswrapper[4656]: E0128 16:15:34.069141 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="extract-utilities" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.069152 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="extract-utilities" Jan 28 16:15:34 crc kubenswrapper[4656]: E0128 16:15:34.069214 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="registry-server" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.069224 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="registry-server" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.069467 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="registry-server" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.071516 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.076773 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-plxhp"] Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.143644 4656 scope.go:117] "RemoveContainer" containerID="e478d6329d394a8ee6946b81a0192102dc331c4b0fad2b32a9549e2af991b8fd" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.213066 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-catalog-content\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.213123 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-utilities\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.213279 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4njxx\" (UniqueName: \"kubernetes.io/projected/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-kube-api-access-4njxx\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.314477 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-catalog-content\") pod \"redhat-marketplace-plxhp\" (UID: 
\"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.314586 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-utilities\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.314965 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-catalog-content\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.314986 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-utilities\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.315142 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4njxx\" (UniqueName: \"kubernetes.io/projected/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-kube-api-access-4njxx\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.333301 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4njxx\" (UniqueName: \"kubernetes.io/projected/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-kube-api-access-4njxx\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " 
pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.443630 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.943818 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-plxhp"] Jan 28 16:15:34 crc kubenswrapper[4656]: W0128 16:15:34.947880 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6b0b613_5382_46d5_9f5a_b53999ea8ae8.slice/crio-fb34719426ec668258b15e9a6104d39934d9ab2a5d597b9c4209ae80d7072658 WatchSource:0}: Error finding container fb34719426ec668258b15e9a6104d39934d9ab2a5d597b9c4209ae80d7072658: Status 404 returned error can't find the container with id fb34719426ec668258b15e9a6104d39934d9ab2a5d597b9c4209ae80d7072658 Jan 28 16:15:35 crc kubenswrapper[4656]: I0128 16:15:35.870117 4656 generic.go:334] "Generic (PLEG): container finished" podID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerID="b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963" exitCode=0 Jan 28 16:15:35 crc kubenswrapper[4656]: I0128 16:15:35.870210 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plxhp" event={"ID":"d6b0b613-5382-46d5-9f5a-b53999ea8ae8","Type":"ContainerDied","Data":"b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963"} Jan 28 16:15:35 crc kubenswrapper[4656]: I0128 16:15:35.870485 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plxhp" event={"ID":"d6b0b613-5382-46d5-9f5a-b53999ea8ae8","Type":"ContainerStarted","Data":"fb34719426ec668258b15e9a6104d39934d9ab2a5d597b9c4209ae80d7072658"} Jan 28 16:15:37 crc kubenswrapper[4656]: I0128 16:15:37.887957 4656 generic.go:334] "Generic (PLEG): container finished" 
podID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerID="7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5" exitCode=0 Jan 28 16:15:37 crc kubenswrapper[4656]: I0128 16:15:37.888037 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plxhp" event={"ID":"d6b0b613-5382-46d5-9f5a-b53999ea8ae8","Type":"ContainerDied","Data":"7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5"} Jan 28 16:15:38 crc kubenswrapper[4656]: I0128 16:15:38.899208 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plxhp" event={"ID":"d6b0b613-5382-46d5-9f5a-b53999ea8ae8","Type":"ContainerStarted","Data":"62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239"} Jan 28 16:15:38 crc kubenswrapper[4656]: I0128 16:15:38.920306 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-plxhp" podStartSLOduration=2.336732715 podStartE2EDuration="4.920278656s" podCreationTimestamp="2026-01-28 16:15:34 +0000 UTC" firstStartedPulling="2026-01-28 16:15:35.872207592 +0000 UTC m=+3426.380378396" lastFinishedPulling="2026-01-28 16:15:38.455753543 +0000 UTC m=+3428.963924337" observedRunningTime="2026-01-28 16:15:38.917908928 +0000 UTC m=+3429.426079732" watchObservedRunningTime="2026-01-28 16:15:38.920278656 +0000 UTC m=+3429.428449470" Jan 28 16:15:44 crc kubenswrapper[4656]: I0128 16:15:44.444874 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:44 crc kubenswrapper[4656]: I0128 16:15:44.445552 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:44 crc kubenswrapper[4656]: I0128 16:15:44.495624 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:44 crc 
kubenswrapper[4656]: I0128 16:15:44.982349 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:45 crc kubenswrapper[4656]: I0128 16:15:45.030762 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-plxhp"] Jan 28 16:15:46 crc kubenswrapper[4656]: I0128 16:15:46.956651 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-plxhp" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="registry-server" containerID="cri-o://62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239" gracePeriod=2 Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.427552 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.602355 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-catalog-content\") pod \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.602525 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4njxx\" (UniqueName: \"kubernetes.io/projected/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-kube-api-access-4njxx\") pod \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.602656 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-utilities\") pod \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " Jan 28 16:15:47 crc 
kubenswrapper[4656]: I0128 16:15:47.603418 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-utilities" (OuterVolumeSpecName: "utilities") pod "d6b0b613-5382-46d5-9f5a-b53999ea8ae8" (UID: "d6b0b613-5382-46d5-9f5a-b53999ea8ae8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.608430 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-kube-api-access-4njxx" (OuterVolumeSpecName: "kube-api-access-4njxx") pod "d6b0b613-5382-46d5-9f5a-b53999ea8ae8" (UID: "d6b0b613-5382-46d5-9f5a-b53999ea8ae8"). InnerVolumeSpecName "kube-api-access-4njxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.630001 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6b0b613-5382-46d5-9f5a-b53999ea8ae8" (UID: "d6b0b613-5382-46d5-9f5a-b53999ea8ae8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.704207 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.704247 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4njxx\" (UniqueName: \"kubernetes.io/projected/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-kube-api-access-4njxx\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.704262 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.964973 4656 generic.go:334] "Generic (PLEG): container finished" podID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerID="62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239" exitCode=0 Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.965021 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plxhp" event={"ID":"d6b0b613-5382-46d5-9f5a-b53999ea8ae8","Type":"ContainerDied","Data":"62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239"} Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.965031 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plxhp" Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.965051 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plxhp" event={"ID":"d6b0b613-5382-46d5-9f5a-b53999ea8ae8","Type":"ContainerDied","Data":"fb34719426ec668258b15e9a6104d39934d9ab2a5d597b9c4209ae80d7072658"} Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.965070 4656 scope.go:117] "RemoveContainer" containerID="62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239" Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.001065 4656 scope.go:117] "RemoveContainer" containerID="7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5" Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.006404 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-plxhp"] Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.014298 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-plxhp"] Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.017993 4656 scope.go:117] "RemoveContainer" containerID="b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963" Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.052698 4656 scope.go:117] "RemoveContainer" containerID="62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239" Jan 28 16:15:48 crc kubenswrapper[4656]: E0128 16:15:48.053201 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239\": container with ID starting with 62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239 not found: ID does not exist" containerID="62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239" Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.053244 4656 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239"} err="failed to get container status \"62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239\": rpc error: code = NotFound desc = could not find container \"62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239\": container with ID starting with 62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239 not found: ID does not exist" Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.053274 4656 scope.go:117] "RemoveContainer" containerID="7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5" Jan 28 16:15:48 crc kubenswrapper[4656]: E0128 16:15:48.053655 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5\": container with ID starting with 7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5 not found: ID does not exist" containerID="7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5" Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.053696 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5"} err="failed to get container status \"7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5\": rpc error: code = NotFound desc = could not find container \"7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5\": container with ID starting with 7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5 not found: ID does not exist" Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.053716 4656 scope.go:117] "RemoveContainer" containerID="b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963" Jan 28 16:15:48 crc kubenswrapper[4656]: E0128 
16:15:48.054062 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963\": container with ID starting with b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963 not found: ID does not exist" containerID="b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963" Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.054094 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963"} err="failed to get container status \"b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963\": rpc error: code = NotFound desc = could not find container \"b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963\": container with ID starting with b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963 not found: ID does not exist" Jan 28 16:15:49 crc kubenswrapper[4656]: I0128 16:15:49.181264 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" path="/var/lib/kubelet/pods/d6b0b613-5382-46d5-9f5a-b53999ea8ae8/volumes" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.801182 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mbkx9"] Jan 28 16:16:03 crc kubenswrapper[4656]: E0128 16:16:03.804821 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="extract-utilities" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.804925 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="extract-utilities" Jan 28 16:16:03 crc kubenswrapper[4656]: E0128 16:16:03.805003 4656 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="registry-server"
Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.805064 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="registry-server"
Jan 28 16:16:03 crc kubenswrapper[4656]: E0128 16:16:03.805128 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="extract-content"
Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.805248 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="extract-content"
Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.805505 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="registry-server"
Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.807143 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.817700 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbkx9"]
Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.887940 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-utilities\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.888121 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-catalog-content\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.888238 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vdbw\" (UniqueName: \"kubernetes.io/projected/b6054ffc-ba79-470b-9be7-f3791dc870f4-kube-api-access-4vdbw\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.989741 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-catalog-content\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.989838 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vdbw\" (UniqueName: \"kubernetes.io/projected/b6054ffc-ba79-470b-9be7-f3791dc870f4-kube-api-access-4vdbw\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.989918 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-utilities\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.990356 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-catalog-content\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.990369 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-utilities\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:04 crc kubenswrapper[4656]: I0128 16:16:04.009909 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vdbw\" (UniqueName: \"kubernetes.io/projected/b6054ffc-ba79-470b-9be7-f3791dc870f4-kube-api-access-4vdbw\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:04 crc kubenswrapper[4656]: I0128 16:16:04.139064 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:04 crc kubenswrapper[4656]: I0128 16:16:04.627026 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbkx9"]
Jan 28 16:16:05 crc kubenswrapper[4656]: I0128 16:16:05.098675 4656 generic.go:334] "Generic (PLEG): container finished" podID="b6054ffc-ba79-470b-9be7-f3791dc870f4" containerID="ae8d56a83413c9afee963ad839a583268c740e6d94db672135aaef76d3ed55b7" exitCode=0
Jan 28 16:16:05 crc kubenswrapper[4656]: I0128 16:16:05.098712 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbkx9" event={"ID":"b6054ffc-ba79-470b-9be7-f3791dc870f4","Type":"ContainerDied","Data":"ae8d56a83413c9afee963ad839a583268c740e6d94db672135aaef76d3ed55b7"}
Jan 28 16:16:05 crc kubenswrapper[4656]: I0128 16:16:05.098948 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbkx9" event={"ID":"b6054ffc-ba79-470b-9be7-f3791dc870f4","Type":"ContainerStarted","Data":"96dc734f0176253cf0403f2bc3250965b9d2b3d66d80de4e6081d7255f1d1741"}
Jan 28 16:16:06 crc kubenswrapper[4656]: I0128 16:16:06.109351 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbkx9" event={"ID":"b6054ffc-ba79-470b-9be7-f3791dc870f4","Type":"ContainerStarted","Data":"93e2394287100e522052d4ba7b3ebc589e045021d50aaab80b3a00b78e37714e"}
Jan 28 16:16:07 crc kubenswrapper[4656]: I0128 16:16:07.118952 4656 generic.go:334] "Generic (PLEG): container finished" podID="b6054ffc-ba79-470b-9be7-f3791dc870f4" containerID="93e2394287100e522052d4ba7b3ebc589e045021d50aaab80b3a00b78e37714e" exitCode=0
Jan 28 16:16:07 crc kubenswrapper[4656]: I0128 16:16:07.119028 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbkx9" event={"ID":"b6054ffc-ba79-470b-9be7-f3791dc870f4","Type":"ContainerDied","Data":"93e2394287100e522052d4ba7b3ebc589e045021d50aaab80b3a00b78e37714e"}
Jan 28 16:16:09 crc kubenswrapper[4656]: I0128 16:16:09.142826 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbkx9" event={"ID":"b6054ffc-ba79-470b-9be7-f3791dc870f4","Type":"ContainerStarted","Data":"ab37288dcb5daec75be2f8b8963752c050315c2ec79e3d56b9335cf9bf7f0d1c"}
Jan 28 16:16:09 crc kubenswrapper[4656]: I0128 16:16:09.163237 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mbkx9" podStartSLOduration=3.2678053719999998 podStartE2EDuration="6.163220941s" podCreationTimestamp="2026-01-28 16:16:03 +0000 UTC" firstStartedPulling="2026-01-28 16:16:05.101005225 +0000 UTC m=+3455.609176029" lastFinishedPulling="2026-01-28 16:16:07.996420794 +0000 UTC m=+3458.504591598" observedRunningTime="2026-01-28 16:16:09.16039286 +0000 UTC m=+3459.668563664" watchObservedRunningTime="2026-01-28 16:16:09.163220941 +0000 UTC m=+3459.671391745"
Jan 28 16:16:11 crc kubenswrapper[4656]: I0128 16:16:11.263787 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 16:16:11 crc kubenswrapper[4656]: I0128 16:16:11.265359 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 16:16:14 crc kubenswrapper[4656]: I0128 16:16:14.139775 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:14 crc kubenswrapper[4656]: I0128 16:16:14.140370 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:14 crc kubenswrapper[4656]: I0128 16:16:14.187628 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:14 crc kubenswrapper[4656]: I0128 16:16:14.243037 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:14 crc kubenswrapper[4656]: I0128 16:16:14.427718 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbkx9"]
Jan 28 16:16:16 crc kubenswrapper[4656]: I0128 16:16:16.197113 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mbkx9" podUID="b6054ffc-ba79-470b-9be7-f3791dc870f4" containerName="registry-server" containerID="cri-o://ab37288dcb5daec75be2f8b8963752c050315c2ec79e3d56b9335cf9bf7f0d1c" gracePeriod=2
Jan 28 16:16:16 crc kubenswrapper[4656]: I0128 16:16:16.597145 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:16 crc kubenswrapper[4656]: I0128 16:16:16.630881 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vdbw\" (UniqueName: \"kubernetes.io/projected/b6054ffc-ba79-470b-9be7-f3791dc870f4-kube-api-access-4vdbw\") pod \"b6054ffc-ba79-470b-9be7-f3791dc870f4\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") "
Jan 28 16:16:16 crc kubenswrapper[4656]: I0128 16:16:16.630978 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-catalog-content\") pod \"b6054ffc-ba79-470b-9be7-f3791dc870f4\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") "
Jan 28 16:16:16 crc kubenswrapper[4656]: I0128 16:16:16.632767 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-utilities" (OuterVolumeSpecName: "utilities") pod "b6054ffc-ba79-470b-9be7-f3791dc870f4" (UID: "b6054ffc-ba79-470b-9be7-f3791dc870f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:16:16 crc kubenswrapper[4656]: I0128 16:16:16.637024 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6054ffc-ba79-470b-9be7-f3791dc870f4-kube-api-access-4vdbw" (OuterVolumeSpecName: "kube-api-access-4vdbw") pod "b6054ffc-ba79-470b-9be7-f3791dc870f4" (UID: "b6054ffc-ba79-470b-9be7-f3791dc870f4"). InnerVolumeSpecName "kube-api-access-4vdbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 16:16:16 crc kubenswrapper[4656]: I0128 16:16:16.631104 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-utilities\") pod \"b6054ffc-ba79-470b-9be7-f3791dc870f4\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") "
Jan 28 16:16:16 crc kubenswrapper[4656]: I0128 16:16:16.648010 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 16:16:16 crc kubenswrapper[4656]: I0128 16:16:16.648039 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vdbw\" (UniqueName: \"kubernetes.io/projected/b6054ffc-ba79-470b-9be7-f3791dc870f4-kube-api-access-4vdbw\") on node \"crc\" DevicePath \"\""
Jan 28 16:16:17 crc kubenswrapper[4656]: I0128 16:16:17.216098 4656 generic.go:334] "Generic (PLEG): container finished" podID="b6054ffc-ba79-470b-9be7-f3791dc870f4" containerID="ab37288dcb5daec75be2f8b8963752c050315c2ec79e3d56b9335cf9bf7f0d1c" exitCode=0
Jan 28 16:16:17 crc kubenswrapper[4656]: I0128 16:16:17.216154 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbkx9" event={"ID":"b6054ffc-ba79-470b-9be7-f3791dc870f4","Type":"ContainerDied","Data":"ab37288dcb5daec75be2f8b8963752c050315c2ec79e3d56b9335cf9bf7f0d1c"}
Jan 28 16:16:17 crc kubenswrapper[4656]: I0128 16:16:17.216202 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbkx9" event={"ID":"b6054ffc-ba79-470b-9be7-f3791dc870f4","Type":"ContainerDied","Data":"96dc734f0176253cf0403f2bc3250965b9d2b3d66d80de4e6081d7255f1d1741"}
Jan 28 16:16:17 crc kubenswrapper[4656]: I0128 16:16:17.216225 4656 scope.go:117] "RemoveContainer" containerID="ab37288dcb5daec75be2f8b8963752c050315c2ec79e3d56b9335cf9bf7f0d1c"
Jan 28 16:16:17 crc kubenswrapper[4656]: I0128 16:16:17.216421 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbkx9"
Jan 28 16:16:17 crc kubenswrapper[4656]: I0128 16:16:17.262582 4656 scope.go:117] "RemoveContainer" containerID="93e2394287100e522052d4ba7b3ebc589e045021d50aaab80b3a00b78e37714e"
Jan 28 16:16:17 crc kubenswrapper[4656]: I0128 16:16:17.290447 4656 scope.go:117] "RemoveContainer" containerID="ae8d56a83413c9afee963ad839a583268c740e6d94db672135aaef76d3ed55b7"
Jan 28 16:16:17 crc kubenswrapper[4656]: I0128 16:16:17.328207 4656 scope.go:117] "RemoveContainer" containerID="ab37288dcb5daec75be2f8b8963752c050315c2ec79e3d56b9335cf9bf7f0d1c"
Jan 28 16:16:17 crc kubenswrapper[4656]: E0128 16:16:17.329211 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab37288dcb5daec75be2f8b8963752c050315c2ec79e3d56b9335cf9bf7f0d1c\": container with ID starting with ab37288dcb5daec75be2f8b8963752c050315c2ec79e3d56b9335cf9bf7f0d1c not found: ID does not exist" containerID="ab37288dcb5daec75be2f8b8963752c050315c2ec79e3d56b9335cf9bf7f0d1c"
Jan 28 16:16:17 crc kubenswrapper[4656]: I0128 16:16:17.329263 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab37288dcb5daec75be2f8b8963752c050315c2ec79e3d56b9335cf9bf7f0d1c"} err="failed to get container status \"ab37288dcb5daec75be2f8b8963752c050315c2ec79e3d56b9335cf9bf7f0d1c\": rpc error: code = NotFound desc = could not find container \"ab37288dcb5daec75be2f8b8963752c050315c2ec79e3d56b9335cf9bf7f0d1c\": container with ID starting with ab37288dcb5daec75be2f8b8963752c050315c2ec79e3d56b9335cf9bf7f0d1c not found: ID does not exist"
Jan 28 16:16:17 crc kubenswrapper[4656]: I0128 16:16:17.329299 4656 scope.go:117] "RemoveContainer" containerID="93e2394287100e522052d4ba7b3ebc589e045021d50aaab80b3a00b78e37714e"
Jan 28 16:16:17 crc kubenswrapper[4656]: E0128 16:16:17.329815 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93e2394287100e522052d4ba7b3ebc589e045021d50aaab80b3a00b78e37714e\": container with ID starting with 93e2394287100e522052d4ba7b3ebc589e045021d50aaab80b3a00b78e37714e not found: ID does not exist" containerID="93e2394287100e522052d4ba7b3ebc589e045021d50aaab80b3a00b78e37714e"
Jan 28 16:16:17 crc kubenswrapper[4656]: I0128 16:16:17.329927 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93e2394287100e522052d4ba7b3ebc589e045021d50aaab80b3a00b78e37714e"} err="failed to get container status \"93e2394287100e522052d4ba7b3ebc589e045021d50aaab80b3a00b78e37714e\": rpc error: code = NotFound desc = could not find container \"93e2394287100e522052d4ba7b3ebc589e045021d50aaab80b3a00b78e37714e\": container with ID starting with 93e2394287100e522052d4ba7b3ebc589e045021d50aaab80b3a00b78e37714e not found: ID does not exist"
Jan 28 16:16:17 crc kubenswrapper[4656]: I0128 16:16:17.329979 4656 scope.go:117] "RemoveContainer" containerID="ae8d56a83413c9afee963ad839a583268c740e6d94db672135aaef76d3ed55b7"
Jan 28 16:16:17 crc kubenswrapper[4656]: E0128 16:16:17.341633 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae8d56a83413c9afee963ad839a583268c740e6d94db672135aaef76d3ed55b7\": container with ID starting with ae8d56a83413c9afee963ad839a583268c740e6d94db672135aaef76d3ed55b7 not found: ID does not exist" containerID="ae8d56a83413c9afee963ad839a583268c740e6d94db672135aaef76d3ed55b7"
Jan 28 16:16:17 crc kubenswrapper[4656]: I0128 16:16:17.341701 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8d56a83413c9afee963ad839a583268c740e6d94db672135aaef76d3ed55b7"} err="failed to get container status \"ae8d56a83413c9afee963ad839a583268c740e6d94db672135aaef76d3ed55b7\": rpc error: code = NotFound desc = could not find container \"ae8d56a83413c9afee963ad839a583268c740e6d94db672135aaef76d3ed55b7\": container with ID starting with ae8d56a83413c9afee963ad839a583268c740e6d94db672135aaef76d3ed55b7 not found: ID does not exist"
Jan 28 16:16:17 crc kubenswrapper[4656]: I0128 16:16:17.925406 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6054ffc-ba79-470b-9be7-f3791dc870f4" (UID: "b6054ffc-ba79-470b-9be7-f3791dc870f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:16:17 crc kubenswrapper[4656]: I0128 16:16:17.996252 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 16:16:18 crc kubenswrapper[4656]: I0128 16:16:18.149884 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbkx9"]
Jan 28 16:16:18 crc kubenswrapper[4656]: I0128 16:16:18.158692 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mbkx9"]
Jan 28 16:16:19 crc kubenswrapper[4656]: I0128 16:16:19.182747 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6054ffc-ba79-470b-9be7-f3791dc870f4" path="/var/lib/kubelet/pods/b6054ffc-ba79-470b-9be7-f3791dc870f4/volumes"
Jan 28 16:16:41 crc kubenswrapper[4656]: I0128 16:16:41.264679 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 16:16:41 crc kubenswrapper[4656]: I0128 16:16:41.265330 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 16:17:11 crc kubenswrapper[4656]: I0128 16:17:11.263865 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 16:17:11 crc kubenswrapper[4656]: I0128 16:17:11.264572 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 16:17:11 crc kubenswrapper[4656]: I0128 16:17:11.264635 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk"
Jan 28 16:17:11 crc kubenswrapper[4656]: I0128 16:17:11.265368 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7999db764e2d77e39711d0b36b7eb67c612bc23b60021126111ecb7d71dcd8b"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 16:17:11 crc kubenswrapper[4656]: I0128 16:17:11.265434 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://d7999db764e2d77e39711d0b36b7eb67c612bc23b60021126111ecb7d71dcd8b" gracePeriod=600
Jan 28 16:17:11 crc kubenswrapper[4656]: I0128 16:17:11.800303 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="d7999db764e2d77e39711d0b36b7eb67c612bc23b60021126111ecb7d71dcd8b" exitCode=0
Jan 28 16:17:11 crc kubenswrapper[4656]: I0128 16:17:11.800472 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"d7999db764e2d77e39711d0b36b7eb67c612bc23b60021126111ecb7d71dcd8b"}
Jan 28 16:17:11 crc kubenswrapper[4656]: I0128 16:17:11.800696 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"63e9c19c54f96e64945a487e8b89b3436215e7ea19208222f018cdf54c50b9f5"}
Jan 28 16:17:11 crc kubenswrapper[4656]: I0128 16:17:11.800755 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"